
Status Update

May 29, 2024

Pranay Dave
Director of Product Marketing, Teradata

Welcome to "Analyze That with GenAI." I'm joined by my colleague Gary Class, and we also have my colleague from AWS, Balaji Gopalan. So let's go ahead. Gary, would you like to give a brief introduction to what we have today?

Gary Class
Industry Strategist for Financial Services, Teradata

Sure, thanks. Yeah, we'll be covering a lot of interesting topics here today. Once again, my name is Gary Class. I'm the Industry Strategist for Financial Services at Teradata. I'll be covering the background on customer complaints and how to address them satisfactorily to drive customer task resolution. After that, we'll have a demonstration by Pranay Dave, who just introduced himself; he's the Director of Product Marketing at Teradata. He'll walk through a step-by-step implementation of how to use ClearScape Analytics to quickly analyze and resolve customer complaints. Then Balaji Gopalan, Principal Solutions Architect from Amazon Web Services, will talk about enabling generative AI and leveraging the latest and greatest generative AI models with Amazon Bedrock. And finally, we'll have a Q&A session, so submit your questions in the Q&A chat, which you should have access to.

So as we go along, if you have a question, please put it in the chat, and we'll address it in the Q&A section. Customer experience is more important than it has ever been. It was always important, and we need to recognize that disjointed complaint resolution costs enterprises millions of dollars and can contribute to the loss of customers. 90% of customers put "resolve my issue" as the number one thing they want their firm, certainly their bank, to do. Customers who have a successful issue resolution are 3.5 times more likely to buy another product. Basically, if customers are dissatisfied, they're not going to buy another product. Roughly one-third of customers say they will leave their institution if their experience is poor and the bank does a very poor job of resolution.

So today we're going to talk about complaint resolution and the challenges of using unstructured customer data in U.S. banks. Fortunately, there's a great data source for us to explore. The Consumer Financial Protection Bureau is a regulatory agency that oversees depository institutions in the United States. It receives about 1.3 million complaints directly from customers every year, in the customers' own words, and about 64% of those are ultimately sent back to the bank or financial institution for further review. That gives us a great source of verbatim customer complaints, very analogous to what banks get through email or phone calls, and both of those are unstructured data, which is part of the business problem we'll address today.

The goal of the bank is to identify the presence of a complaint in an inbound communication and make sure that complaint is resolved satisfactorily. Unfortunately, today that's often a very manual process. We're going to talk about, and Pranay will demonstrate, how that can be automated and streamlined. For me, a very important idea is the customer task. What is it that the customer is coming to the bank to resolve? What's the banking issue within their customer journey that they want to resolve? We're going to look for two key things. The first is customer task effectiveness: did the bank resolve the customer's task to their satisfaction? Did we identify the issue and resolve it satisfactorily? Related to that is customer task efficiency: what was the elapsed time, effort, and cost associated with resolving that customer task?

Both very important high-level metrics that banks need to track. With that, it's back to Pranay.

Pranay Dave
Director of Product Marketing, Teradata

Yeah, thanks a lot, Gary. As Gary has described, customer complaints are a very important problem, and to develop a solution for them, we have used generative AI. Here you see different types of generative AI applications. Our solution uses generative AI to improve institutional knowledge about the company through various analytics. It also acts as a trusted advisor by advising on how to resolve complaints. In addition, we are automating the process through the use of virtual assistants. At Teradata, we also ensure that our solution follows the principles of trusted AI. Complaint analysis is all about people, because all the complaints come from people, so it's very important to correctly analyze and understand each complaint.

Complaint analysis should also be transparent, bias-free, and explainable, and it should create value for both the enterprise and the customers. Let me also show you the architecture of the demo. Complaints can come from various channels and formats: as text, through voice, through web channels, or through a chatbot. It is important to harmonize and integrate all that data, which we do in VantageCloud. Then we use Teradata ClearScape Analytics, along with Amazon Bedrock, for various analytics such as sentiment analysis, complaint classification, clustering, topic modeling, summarization, and speech analysis.

The output of these analytics is then used to augment the Customer 360 view, which in turn creates signals that can help operationalize customer complaint resolution. Before I show you the demo, let me briefly mention that I'll show two demo scenarios. The first is a data science-oriented demo, where I'll use a Jupyter Notebook as the front end, connected to VantageCloud and leveraging Amazon Bedrock. In the second scenario, I'll show how to operationalize this use case from a business persona's point of view, using a BI tool as the front end, again connected to VantageCloud and leveraging Amazon Bedrock. Now, let me switch to the demo.

Let me start with the data science demo using Jupyter Notebooks. I've already executed the notebook so that we can save some time. Let me begin with complaint classification, which is used to predict whether any incoming customer communication is a complaint or not. We start by importing some Python libraries. The important one here is teradataml, which provides all the AI and ML functionality available within VantageCloud and ClearScape Analytics. Then we have the Bedrock client, which we'll use to leverage the large language models. Next, we connect to VantageCloud and get some data. We also need to configure the AWS access key. Finally, we initialize Bedrock, where you can choose the large language model of your choice.

We have chosen the Mistral Large language model, and I will store this Bedrock configuration in a variable called mistral. Let me quickly show you what Mistral is. Mistral AI provides very powerful large language models, and the reason we chose it is that it brings the benefits of transparency and trust, which are very important when you are analyzing customer complaints. In addition, it can analyze complaints in various languages. Now, let me switch back to the Jupyter Notebook, where you can see a snapshot of the data. Here I'm using the teradataml DataFrame function on a table called Consumer Complaints, and I have a variable called tdf, which is a pointer to this table.

The consumer complaint data has information on the date of the complaint, the type of product, the issues, and the detailed complaint text. As you can see, the complaints can be very long and are expressed in natural language. At first look, everything looks like a complaint. However, we can use the power of a large language model to determine whether each one is an actual complaint or not. So here I take the Teradata DataFrame we created earlier, take the complaint, and do some prompt engineering. In the prompt, I ask the model to classify the consumer narrative as a complaint or not a complaint, and I also ask it to give its reasoning.

I pass this DataFrame to the mistral variable we created earlier, and as output I get a prediction of whether the consumer narrative is a complaint or not, along with the reasoning, which is also called Chain-of-Thought reasoning. Now let's look at the results, starting with the first record. It says the consumer was checking their credit report and found it was incorrect. It also says they are a victim of a data breach, and a credit bureau mishandled the investigation. This is a very serious complaint, and Bedrock does a very nice job of correctly predicting it as a complaint. In addition, you get Chain-of-Thought reasoning explaining why it was predicted as a complaint.
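As a rough sketch of what this classification step could look like in code: the model ID, payload fields, and response shape below are assumptions for illustration, not the exact notebook code; check the Bedrock documentation for the request format of the model you choose.

```python
import json

MODEL_ID = "mistral.mistral-large-2402-v1:0"  # assumed model id

def build_prompt(narrative: str) -> str:
    # Prompt engineering as described: ask for a label plus reasoning.
    return (
        "Classify the following consumer narrative as 'complaint' or "
        "'not a complaint', and give your reasoning.\n\n"
        f"Narrative: {narrative}"
    )

def classify_complaint(narrative: str, bedrock_client) -> str:
    # bedrock_client would be boto3.client("bedrock-runtime") in practice.
    body = json.dumps({"prompt": build_prompt(narrative), "max_tokens": 256})
    resp = bedrock_client.invoke_model(modelId=MODEL_ID, body=body)
    return json.loads(resp["body"].read())["outputs"][0]["text"]
```

In the notebook itself, the prompt is applied across the whole DataFrame rather than one narrative at a time; the idea is the same.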

This is amazing, and you can already see the power of combining data in Teradata VantageCloud with Amazon Bedrock. Let's also see an example of a consumer narrative that is not classified as a complaint. Here we have a narrative in which the consumer has received inquiry letters and is not sure of the source of the inquiries. This is correctly predicted as not a complaint, because it has nothing to do with the bank's products; the Chain-of-Thought reasoning concludes that it is a concern rather than a complaint. Once you have the classification, you can also do some additional analytics, such as counting complaints versus non-complaints. All right, let's move to the next analytic, which is sentiment analysis. Generally, the sentiment of a complaint is negative.

However, the objective of this notebook is to find the sentiment by financial product. Once again, we start by importing our packages and then connecting to Vantage and Bedrock. This time we are using a large language model from AI21 Labs called Jurassic-2, which we found works great for sentiment analysis. You can see the advantage of Bedrock here: you can use the large language model of your choice. As I showed you earlier, I create a DataFrame that is a pointer to the consumer complaints table. Then, using the records in that table, I do prompt engineering to classify the sentiment as positive, negative, or neutral, and use the AI21 model to predict the sentiment of each consumer complaint.

As a result, I get the sentiment of the customer complaints, which is mostly negative; there will occasionally be some non-negative sentiment, but it is very rare. You can also do some additional analytics, such as a word cloud, to find the main words occurring in the negative-sentiment reviews. We can also use Teradata in-database functions such as OrdinalEncoder, which separates the column into negative, neutral, and positive sentiments. The in-database functions run within the Teradata database, so they run at super scale with very high performance. Then, using another in-database function called ColumnTransformer, we give a value of minus one to the negative sentiment. Now we can plot the sentiment over the years by product type.
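The in-database OrdinalEncoder and ColumnTransformer steps run inside Vantage at scale; as a local plain-Python sketch, the transformation they perform amounts to mapping each label to a score and totaling the scores per year and product, ready to plot:

```python
from collections import defaultdict

# Map sentiment labels to numeric scores (negative gets minus one, as in the demo).
SENTIMENT_SCORE = {"negative": -1, "neutral": 0, "positive": 1}

def yearly_sentiment_totals(records):
    # records: iterable of (year, product, sentiment_label) rows
    totals = defaultdict(int)
    for year, product, label in records:
        totals[(year, product)] += SENTIMENT_SCORE[label]
    return dict(totals)
```

Each (year, product) total is what ends up as one point on the sentiment-over-years chart.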

On the x-axis you have the year of the complaint, and on the y-axis the total of all the sentiments; the different lines are the various product types. Here you can see that credit cards and prepaid cards have a sentiment that is decreasing every year. Such information can be very valuable to the bank for identifying areas of improvement. So in this notebook, you saw the power of combining Vantage in-database functions and Amazon Bedrock to get cutting-edge insights. Hopefully you are enjoying the demo so far, but we have just gotten started, and if you have any questions, please put them in the chat. Now, let me go to another cool generative AI application: text clustering.

The objective of clustering is to group similar-looking complaints, which can help efficiently manage all the different complaints. Here we'll be using the Teradata in-database k-means function, which provides clustering at very large scale. In addition, we'll be using embeddings, which help us translate text data into numerical vectors. As usual, we start by connecting to Vantage and creating a DataFrame pointing to the complaints table. Before we do clustering, we can also do some data exploration to find statistics on the complaints, such as the number of complaints by year, by month, and by product type.

Now, let me set up the connection to AWS. Here I'm using the embedding function of Bedrock, which converts all the customer complaint text into numerical vectors. For example, here I have a complaint text, and then I have the embeddings: the numerical vector for that text. The length of this vector is 1,535, so the text got converted into 1,535 numerical columns, which is a very sophisticated way to represent text. Now that we have the numerical representation of the text, we can use the Teradata Vantage in-database function TD_KMeans, which takes these embeddings as input and builds clusters. TD_KMeans is very scalable and runs on a very large number of columns with great performance.
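Sketched in code, the embedding step sends each complaint text to a Bedrock embedding model and gets back a numeric vector; the model ID and payload shape below are assumptions, and the TD_KMeans query is an illustrative in-database invocation, not a tested query from the demo.

```python
import json

EMBED_MODEL_ID = "amazon.titan-embed-text-v1"  # assumed embedding model id

def embed_text(text: str, bedrock_client) -> list:
    # bedrock_client would be boto3.client("bedrock-runtime") in practice.
    body = json.dumps({"inputText": text})
    resp = bedrock_client.invoke_model(modelId=EMBED_MODEL_ID, body=body)
    return json.loads(resp["body"].read())["embedding"]

# Once the vectors land in a table, clustering stays in-database.
# Table and column names here are hypothetical.
TD_KMEANS_SQL = """
SELECT * FROM TD_KMeans (
  ON complaint_embeddings AS InputTable
  USING
    IdColumn('complaint_id')
    TargetColumns('[emb_0:emb_1534]')
    NumClusters(10)
) AS dt;
"""
```

Keeping the clustering in-database means the 1,535 embedding columns never have to leave Vantage.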

As a result, each customer complaint is assigned to a cluster. We can also visualize the clusters: first we convert the embeddings into a 2D representation using t-SNE, a dimensionality reduction technique, and then plot the 2D result as a scatter plot. Each dot is a complaint, and the color of the dot corresponds to a cluster. When I hover over the dots, I can also see the complaint text, which is useful for interpreting the clusters. For example, you can see that the top cluster here corresponds to complaints related to student loans. Such clustering is very useful for grouping similar complaints, which helps us manage the responses to similar-looking complaints effectively.

So in this notebook, you saw the power of Teradata ClearScape Analytics in-database functions, together with Amazon Bedrock, to make generative AI actionable and operational. We've now demoed all these analytics in a Jupyter Notebook, so let me switch to the business persona demo. Here we are using Power BI as the front end, connected to Vantage, which in turn leverages the Bedrock APIs. You can see how this use case can be operationalized for a business user. On the left-hand side are all the different analytics I showed you previously. Let me go through some of them, starting with topic modeling. In the clustering analytics, you saw that we can group similar complaints together; however, we had to interpret the clusters manually.

With topic modeling, you can use large language models to interpret the complaint topics automatically. Here you see that most of the complaints relate to mortgage applications, report inaccuracies, and payment trouble. In addition, you can view all the details in the table below, which shows each complaint, its topic, and the Chain-of-Thought reasoning for how the topic was predicted. Now, let me show you the next analytic, complaint summarization, which is very useful for customer complaint management because it helps you quickly understand a complaint. For example, here we see a very long complaint, yet we also have a very efficient summary in just a few words. Such a summary is very useful for understanding the complaint efficiently.

In addition, we have the reasoning for how the summary was created. You can also create additional visualizations, like the one shown here, which plots the complaint narrative length against the length of the summary. You can see that some complaints are very long, nearly 1,000 words or more, yet they are efficiently summarized in fewer than 50 words. Now, let me go to the last analytic, Customer 360. As a reminder, please don't forget to put your questions in the chat. Customer 360 is all about how you operationalize all these insights: we can use the outputs of the large language models and the ClearScape Analytics in-database functions to augment the Customer 360 view.
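The summarization step, like the classifier, is ordinary prompt engineering over each complaint narrative; a minimal, hypothetical sketch of such a prompt builder (the wording and word limit are illustrative, not the demo's actual prompt):

```python
def build_summary_prompt(narrative: str, max_words: int = 50) -> str:
    # Ask for a tight summary plus the reasoning behind it, as in the demo.
    return (
        f"Summarize the following complaint in at most {max_words} words, "
        "then briefly explain how you arrived at the summary.\n\n"
        f"Complaint: {narrative}"
    )
```

The prompt is then sent to whichever Bedrock model is configured, exactly as with classification and sentiment.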

Here you see the Customer 360 view, which shows customer details such as the ID, name, address, and other banking information. Moving to the right, you also see the insights coming from generative AI. We have augmented the Customer 360 data with the LLM outputs: the sentiment of the complaint, the topic of the complaint, and the summary. Based on this information, we can also derive the bank strategy, the action to take. In this way, we have used data and generative AI to create a signal, the bank strategy, and this signal helps operationalize generative AI. We have also been experimenting with speech analytics, since complaints can come in through voice calls.

Generative AI can help convert voice data into the various insights required for customer complaint analytics. To conclude the demo, you saw how to use the power of Vantage ClearScape Analytics and Amazon Bedrock to leverage generative AI to efficiently analyze customer complaints, get actionable insights, and operationalize generative AI at scale. Hopefully you enjoyed the demo; if you have questions, please put them in the chat so we can answer them. Here are the key takeaways. First, Teradata customers can leverage their existing data, such as Customer 360 data, which is very important for operationalizing such use cases, together with large language models, for this cutting-edge and innovative use case.

We also saw that Amazon Bedrock provides a wide variety of large language models for solving enterprise-level use cases. With Teradata Vantage, you can implement trusted AI by combining the power of trusted data with explainable large language models. I would also like to point you to this particular document; you'll find the links available. It's a study of the cost savings and benefits of ClearScape Analytics, produced by Forrester Consulting, and it shows how one of our Teradata customers is driving significant profits, productivity improvements, and customer engagement using ClearScape Analytics and Vantage. I would also like to point out this other asset.

The demo I showed you is also available at clearscape.teradata.com, so you can try it out. That environment, which we call ClearScape Analytics Experience, has 80-plus demos you can try. There's already some test data, but you can also work with your own data... which will give you a concrete feel for the power of ClearScape Analytics as well as our partners. Now, I will pass it on to Balaji, who is going to explain all the magic behind Amazon Bedrock. Balaji, over to you.

Balaji Gopalan
Principal Solutions Architect, Amazon Web Services

Thank you, Pranay, and thank you, audience, for joining this wonderful session. I'm so excited to speak here. My name is Balaji Gopalan, Principal Solutions Architect at AWS. We work with customers on how to architect solutions using AWS and partner capabilities. Pranay shared the demo, and Gary talked about some of the reasons customers can use this. I want to talk about how to think about Bedrock: what is Bedrock, and how does it fit into the picture you saw today? Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models from leading companies.

You'll find companies such as AI21 Labs, Anthropic, Cohere, Meta, and Mistral AI, and you can use these foundation models to build the broad set of capabilities you need for your generative AI applications. The key thing is to simplify development while maintaining privacy and security. With Amazon Bedrock's comprehensive capabilities, you can experiment with a wide range of these top FMs. You can customize them privately with your own data, using techniques such as fine-tuning or retrieval-augmented generation (RAG). You can also use managed agents that can execute complex business tasks: you define a multi-step task, let the LLM process it and solve the problem, and connect to the different business systems you may have internally.

The nice thing is that Amazon Bedrock is serverless, so you don't need to manage infrastructure, and you can securely integrate and deploy generative AI applications. The key theme here is privacy and customization: customize securely with your own data. I also want to talk about choice of models and how you experiment. Take a step back: these are early days, so customers want to move fast. At the same time, generative AI is evolving quickly, and newer options and innovations are happening practically daily. In times like these, the ability to adapt is the most valuable tool. Customers want to experiment quickly, deploy, iterate, and pivot quickly.

So they want to choose the latest and greatest FMs and immediately embrace whatever comes tomorrow; that's the option this gives them. We have many of the large FMs that we talked about before. The other point is: you're choosing among those models, but under what criteria do you decide? This is where model evaluation, part of Amazon Bedrock, helps. You can use automatic or human-based evaluation methods, bring your own data sets for the evaluation, and use metrics to guide you. I think the key theme here is your data: a generic, generally available generative AI model is the same for everyone.

What differentiates you is your customers' data and how you put it to work. Behind the scenes, AWS capabilities from compute to storage to SageMaker bring the differentiated capability that lets you innovate in the gen AI space with your data. That's the key theme to keep in mind. The other thing I would say concerns responsible AI: how do you enforce responsible AI with your data? There are guardrails available within Bedrock where you can define policies. You can specify topics to avoid, and the service automatically detects and prevents user queries and FM responses that fall into those categories. You can also set thresholds for toxicity and offensive language.

Basically, if you have a paradigm that's relevant to your application from a responsible AI perspective, you have a mechanism to enforce it through the guardrails in Amazon Bedrock. The other key theme is that customers want to secure their data; their data is valuable, so security and privacy are key. With Amazon Bedrock, very much like other AWS services, the data is secure. None of a customer's data is used to train the original base FMs. When you fine-tune those models, a private version is used, and it stays within your environment. All of the data is encrypted at rest and in transit, it stays within your VPC, and the same access controls you already have are enforceable.

Bedrock also satisfies a lot of regulatory compliance requirements, so if you're thinking about GDPR, SOC, or HIPAA compliance, that really helps. We have a lot of financial services customers; one example is NatWest Group, a leading bank in the U.K. serving over 19 million customers and supporting families and communities. They were able to use these capabilities to build a next-generation financial services platform to combat financial crime. There are many more customer examples you can see on the website. As a closing thought, let me point to where you can get started: follow these links.

The first one shows how to get started with Bedrock; if you want a step-by-step tutorial, go to the second; and if you want a deep dive with a hands-on workshop, click on the third. There are also more demos combining Teradata with the GenAI capabilities you can use with your data, including the one Pranay presented today, which you will find very valuable. The key theme is that you can start innovating today with the data you have in Teradata, along with Amazon Bedrock. Thank you.

Pranay Dave
Director of Product Marketing, Teradata

Thanks a lot, Balaji. That was fantastic, and a lot of questions have come in, so I'll try to pick a few and answer them. Let me start with this question from Steven DeLoy: "What was the source of the bank action guidance?" To answer that, we used large language model fine-tuning. Fine-tuning helps you decide what outputs you want: you can say, "This is the sentiment, this is the complaint, this is the summary of the complaint, and this is the action I would like to take," and with that data, you can fine-tune a model.
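A hypothetical sketch of what one such fine-tuning record might look like: the LLM-derived fields (sentiment, topic, summary) form the prompt, and the desired bank action is the completion. The field names follow the common prompt/completion JSONL convention; the exact schema depends on the model being customized.

```python
import json

def make_finetune_record(sentiment, topic, summary, action):
    # One training example pairing complaint context with the target action.
    prompt = (
        f"Sentiment: {sentiment}\nTopic: {topic}\nSummary: {summary}\n"
        "Recommended bank action:"
    )
    return json.dumps({"prompt": prompt, "completion": " " + action})
```

A file of such records, one JSON object per line, is the typical input to a fine-tuning job.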

So we fine-tuned Mistral AI, which then gave us the bank action you saw there. Very interesting question. Let me also take this next question, which, Gary, perhaps you can answer. It's from Yogesh: "Can we suggest actions to support customer support agents to resolve the task with low latency?"

Gary Class
Industry Strategist for Financial Services, Teradata

Yeah, I'll take that. I did mention the idea of customer task and customer journey analytics, which includes the call center representative. The goal of the bank is to identify what the customer task is: person-to-person payments or a fee reversal, for instance. One of the things we've found in our experience is that you can leverage the customer communications that were flagged as complaints and, as Pranay demonstrated, the summaries of those complaints; that can be a repository made available in knowledge management systems for the call center agent. So they not only get a synopsis of the complaint, they have it married to all the data from the customer profile, as well as the customer task they need to resolve. So you can have a very filtered, focused flag-and-summarize and-

Pranay Dave
Director of Product Marketing, Teradata

Thanks very much. In this particular use case, it can get quite sophisticated, but it's also important. So I think, let's say-

Gary Class
Industry Strategist for Financial Services, Teradata

That has really been my experience: when the bank's customers communicate, it's paramount for the bank to be able to capture that text in a form that can be analyzed. So the lesson learned is that it's most valuable if you put it within the context of customer journey analytics across all the different channels, so you understand the customer experience that these complaints are illustrating.

Balaji Gopalan
Principal Solutions Architect, Amazon Web Services

I think the theme is that with generative AI, customers have learned that data is not an afterthought: managing data is critical. Rather than finding a new data source to move, they already have data, like the complaint data here, that they can put to use. But increasingly, as Pranay mentioned, in terms of operationalizing, they're also realizing that some of the regulation is still changing; there are still unsettled components and aspects there. They want to be a leader, but they don't want to expose data accidentally. So they're asking, "How can I experiment with this with my internal audience or within my business lines before I widen that net?"

That's kind of a theme I'm seeing as well.

Pranay Dave
Director of Product Marketing, Teradata

Yeah. Thanks, Balaji. I think we have time for maybe one more question: "As a business expert, how do I determine which model is fit for a given task?" I think this question is great. There are currently lots and lots of large language models out there. The advantage of Bedrock is that they've already curated and preselected important models, so that's a good starting list. And, as I mentioned, Teradata is also committed to trusted AI, so we ensure that all the models we use are explainable and that a business user can understand the outputs of the model.

So if you take the direction of implementing trusted AI, where you want the models to be operational as well as understandable, you'll start shortlisting the models that best fit your business. There are various other criteria too, such as language: for customer complaint analysis, it's important that the use case works in different languages, so you also need to make sure the model works in different languages. These criteria add to what I just mentioned. With that, we are already at the top of the hour, so I would like to thank everybody for their participation.

Lots of other questions have come in, so we'll try to answer those directly as well. Don't worry, we'll come back with answers to all the questions you've asked. With that, I'd like to thank you for your very enthusiastic participation, and I look forward to you connecting to the links we've shown so that you can try out the demo yourself. Thank you very much.

Gary Class
Industry Strategist for Financial Services, Teradata

Thank you.
