RWS Holdings plc (AIM:RWS)
London flag London · Delayed Price · Currency is GBP · Price in GBX
96.20
-4.10 (-4.09%)
May 6, 2026, 4:53 PM GMT
← View all transcripts

Status Update

Oct 10, 2023

Ian El-Mokadem
CEO, RWS

Well, good afternoon, and welcome to you all. I'm Ian, CEO of RWS, and it's a great pleasure to see so many of you here today, and also dialed in online. I should say it's also great to see that you haven't all been replaced by machines yet, something that may be a bit of a theme for the course of this afternoon. I'm gonna start with a bit of an introduction, and start off by outlining some aims for today. We're gonna aim today to unpack our AI and technology story, where we've come from, where we are today, and where we're going. We're gonna explain how AI and technology are critical both to us and to our wider industry. We're gonna showcase the capabilities that we have and the expertise that sits behind it.

We're gonna illustrate how we see AI and technology supporting both growth and efficiency in our business, and we're also gonna outline some of the future developments that we're working on. More than anything, today is an unparalleled opportunity to meet the experts that sit behind the products that you'll see, talk to them about, you know, their thinking on the industry, and understand the products in as much detail as you want to. Hopefully that will help us all to understand how that connects us, and it makes us a net beneficiary of artificial intelligence. In terms of the agenda for today, after my introduction, I'll hand over to Thomas, who leads our software business, Language and Content Technology. He'll do an overview, and then the GMs of our various tech products will take the stage.

Mihai, who leads the Language Weaver team, Matt, who leads the Trados team, and Alex, who leads on content technology. We'll then have the first of two panel Q&A sessions, a short break, where all the technology booths will come alive. Please take the opportunity whilst you're having a cup of tea to go and look at the products and ask any questions you have of those. We'll then come back after the break. We'll shift mode slightly. We'll talk about TrainAI with Vasagi, who heads our data services business. Then we'll move to how we use technology internally in our language delivery platform with Maria Schnell, our Chief Language Officer, who will talk about that. I'll then do a brief summary.

We'll have a second Q&A session, and then if we're all still standing, we'll go and have some drinks and canapés and more tech demonstrations if you haven't managed to get round all of the booths. So I hope it will be a really packed and useful afternoon for everybody. Bit of navigation. You all know us, I think, pretty well. We operate in a large, fragmented market. We have an enviable, long-tenured client list. We have a strong sort of track record of financial stability and growth. And I think particularly today, we're gonna focus on two aspects of this model. We're gonna talk about our unique platform, the technologies in particular, that make us quite unique as a player in our industry, many of which came with the SDL acquisition.

We're also going to talk about how we leverage our global scale and reach, utilizing a lot of the technologies that we will talk you through. I guess one of the key messages, we are a very big user of the technologies that we also sell to our clients, which gives us real credibility in terms of how to apply these tools safely, securely, and in an intelligent way. Those of you familiar with our group structure, again, a bit of navigation. We have four operating divisions. The presenters you see today sit in two of those divisions and on our LXD. Vasagi sits in our Language Services division with TrainAI.

All of our software businesses sit in the Language and Content Technology business, and then Maria heads the shared service for language, the LXD, that supports our services divisions. Everything we talk about today is underpinned by our core purpose, you saw it in the video, unlocking global understanding, and you see our four company values there. And in particular, I think today, I really hope you will see evidence of two of those in particular, really in action. So partnering, a lot of these solutions require us to partner well with our clients, with third parties to make them real. And pioneering. We are very proud that as an organization, we've always been at the forefront of pioneering new ways of working our...

and helping our clients to understand how new technologies can help them, and I think you will see lots of examples of that through the course of the afternoon. So drilling down a little bit into the substance of the day, bit of a recap: We talked quite a lot about technology and AI at our Capital Markets Day in March 2022. We stressed its importance, we welcomed its impact, we confirmed its central place in the group's future, and we highlighted a number of initiatives we were taking at the time, to pursue those aims. Today, we're gonna give you an update, walk you through those opportunities and where we've got to, put the experts in front of you, and give greater clarity on that role that we see for technology in the business.

We recognize there's an awful lot of hype and noise around this topic, right now, and I'm sure you've all been feeling it, across the portfolios that you manage. What we're gonna try and focus us on are the real opportunities that we see in front of us, given our footprint, given our focus, and we hope that will come across. Now, we've said a few things previously. We've said that the macro environment has been challenging. We've said that we face some general and sector-specific headwinds. However, one thing that we really want to get across today is that we have had some challenges, but AI is not one of them. AI and technology is a positive contributor to the results that we are generating, both in terms of growth and in terms of efficiency.

We are not losing clients because of AI, quite the opposite, and I hope we will illustrate why that is as we go through the course of the afternoon. I should say that we will not be talking about current trading today. We have a scheduled trading update on October 25th. So today is very much teach-in, and we'll come back and share our results with you as scheduled, as normal. Bit of a step back on technology in this industry. On this chart, you see the size of the market on the Y-axis, you see time down below. And if we were sat here in the 1980s, we'd have been getting very excited about things like terminology databases and electronic dictionaries, which seemed quite sophisticated at the time.

In the 1990s, we started to see the emergence of computer-aided translation tools, translation management systems, and we've been a player in this space, you know, right since the beginning. We started to see AI emerging as a factor in the industry in the 2000s with the emergence of statistical machine translation, followed some 10 years later with the development of neural machine translation. And again, as we go through the Language Weaver journey, you will see we've been in that process from the very beginning. And here we are today, getting quite excited about the potential that LLMs and generative AI bring to add further to the growth of this market.

Through the course of the afternoon, we will try and explain why we think these technologies have assisted the market to grow and haven't just been a, an efficiency driver, which they are as well. Underpinning all of this is a theme around content, a content explosion that is essentially the raw material upon which everything RWS sits. Now, our AI is a huge topic. Anybody standing here trying to claim they have perfect foresight on this topic would be completely deluded. So our approach to this is to focus on what's in front of us. What are the opportunities that we can see, given the focus of our business, and what are the risks that we can see, given the focus of our business, and being clear that we're approaching both of those.

While we haven't got perfect foresight, we have got some strong convictions about AI in our industry, and here they are. We believe it's essential to play, to adopt the technologies, to be at the forefront of doing that, and to have the capabilities in-house to do at least some of that. We still believe in humans. Their roles have evolved through that timeline that I showed you. They will continue to evolve in our industry, but we still believe that that blend of human and technology is key to success in this industry, and we'll illustrate that to you, especially in Maria's section of the presentation.

We believe that AI will continue to drive efficiencies in cost per word, but that will be more than balanced by the growth in content and the growth in use cases, which we'll illustrate through the course of this afternoon. What's really interesting about timing right now is, just as you're interested in our AI and technology thinking, a lot of our clients are getting asked the same questions. So there's a real moment of opportunity for us as a partner to many of those high-quality enterprise clients to be the people that they talk to, to be alongside their localization teams, their product teams, and help them figure out how the latest technologies can help them do their jobs better. And the other thing we firmly believe is that partnerships will become even more important in the ecosystem that we're now in.

We're very well placed to be a beneficiary. We have the opportunity to be that valued partner with our existing clients. They are, in many of examples, asking us, what we can do to help them. We are highly trusted on privacy and security, something that's absolutely essential and very front of mind in our clients' minds. In the way our clients interact with us today, the amount of questions we get through the questionnaires, the surveys they do before they will place business with us around security and privacy, have never been more intense, and rightly so, given the inherent risks of some of the technologies that we'll talk about this afternoon.

We're already a leading player, both selling these products but also a very experienced user of them, and that gives us credibility and the depth of expertise to put experts in front of our clients who really know what they're talking about. For that reason, we're seen as an attractive partner and from time to time, an attractive acquirer as well. Now, in our strategy launch last year, we talked about five growth drivers, which drive growth in our industry generally. Today, we're really gonna focus on two of those, the explosion of data and content, and there are all sorts of statistics on there. My favorite one at the moment, apparently, 90% of the world's data was generated in the last two years alone. There are lots of statistics around like that. It is huge explosion. We feel it in our lives.

Then no need for me to talk about the growth in AI. We talked about that last year. We've all been reading about it. We've been seeing it. We see it in client budgets. It is an area of growth and investment. The drivers still remain, you know, very true. Also in our strategy, we launched our five-point sort of growth plan, and today we're gonna focus on three of those. The first one, unique technology and AI. We're gonna talk about two of our key product sets, Language Weaver and Trados. We're also gonna talk in the content management space about our Tridion content management platform as well. In our strategy, we said we were gonna invest, both organically and in M&A, in expanding our portfolio.

So today, one of the areas we said we'd invest organically was data annotation, data services, and we'll talk today about TrainAI, the product name that we use there. And we also talked about using M&A from time to time to build out our capabilities. So in the content management space, we will talk about products such as Fonto and Propylon. I should say that John Harrington, the MD of Propylon, the business we bought a few months ago, is actually here today. So if you'd like to talk to John both about his business but also about what it's like to be acquired by RWS, feel free to call on John in one of the breaks. I know he'd be happy to talk to you.

And then last, but by no means least, we talked in our strategy about how essential it is to leverage scale in this industry and what an advantage we have with the language delivery platform that we acquired with SDL, which we've doubled down on over the last two years and is really helping to underpin efficiency and responsiveness and new service development across the business. So Maria Language Experience Delivery platform. now, content takes many forms, and we can handle whatever form it comes in, whether that's text, images, audio, video, and you'll hear us talking about that.

And equally, we're able to support content through its life cycle, helping our clients to create it, whether that's by human means or by technological means, helping them to collect it, to transform it, to analyze, to engage it, to launch it, to manage it, to process the data that's coming back at them from the market. And all of that is with the purpose of helping them to grow their businesses by launching new products, by attracting new customers, by delivering great user experiences, by maintaining regulatory compliance using technology, and by helping them to process, using AI, a lot of the inbound material that comes at them. So at every stage through this content life cycle, technology is critical and is in use day in, day out across RWS. We have a depth of experience.

We have enterprise-grade, well-established products, Trados, Tridion, Language Weaver, commonly known across our industry, leaders in their field, decades of presence, and evolution, and investment. And more recently, TrainAI, which itself, a much newer field, we have been in there with the pioneers, building the skill sets to do that. We have over 40 patents in the area of AI alone and over 100 peer-reviewed papers. In terms of investment, in 2022, about GBP 34 million went into investing in the products that you'll see here today, and I should say that excludes the internal investments we've been making on things like finance and HR and other things. And about 600 colleagues are involved in developing the products that we'll talk about today.

We're seen as a responsible player, very focused, always have been on data privacy and security, and we are a big user. I think, as we said at our mid-year results, some 60% of the words that go through our language platform are processed by a machine first before a human being does any work on them, and Maria will bring that to life, and will also explain the mysteries of MTQE, Machine Translation Quality Estimation, so I won't steal Maria's thunder. Now, a lot of what you see today came from the SDL acquisition. Without that, I have to say, I'd be feeling quite exposed, standing up, talking about AI and technology in this industry. Today we are leveraging through the LXD, Language Weaver, to help us be more efficient.

We are using the technologies we acquired from SDL in ways we actually didn't even envisage at the time we did the deal. So our IP Services patent business, we thought we would probably never use the language platform because the needs there are so specialist. We now see a way to do that. So from October of this year, we've started to make the process of moving work, even those most specialist areas, through our language platform, leveraging the technologies that you'll see today. And of course, there's a great opportunity to grow these products, both, you know, in their own right, by taking them to market and by cross-selling them to our existing clients. So in terms of right to win, we have enterprise-grade products.

We have a really interesting capability for creating data and validating it, leveraging our in-house linguists, our 1,600 in-house linguists, and that, as you'll hear, is an essential ingredient to training AI capabilities effectively and in a trustworthy way. Deep expertise, enviable client set, and we are an attractive partner. So we think very much that we are as well-placed as anybody in this industry to make sense of the technologies that are emerging and make them real, both for us, but most importantly, for our clients. To help orient the rest of the day, our final slide from me before I hand over to Thomas, we can help our clients at every stage in their AI journey.

So our tech services team are able to help clients who are just starting to evaluate different products, different tools that they might use, and how they might integrate them into their own technology stacks. When they start to build AI capability, our TrainAI team have been, for many years, helping clients to develop their own AI platforms for a whole variety of uses. And of course, with Language Weaver, we can give our clients a tailored, trained platform specific to their industry, specific to their own company. We can help them to build and train that in a secure way. And then, as clients start to mature and use AI, we have a product set that we, we will take you through, which can help them at every stage in that content and linguistic value chain.

As I've said, we're also a big user. That's probably enough by way of introduction from me. I'm now very happy to hand over to Thomas to take us into the next section.

Thomas Labarthe
President of Language and Content Technology, RWS

Thank you, Ian, and good afternoon, everybody. My name is Thomas Labarthe. I'm the president of the Language and Content Technology division, our software arm. I am an engineer by trade myself. I've been working in the enterprise software field for the past 20 years, with a particular focus on artificial intelligence across multiple sector. It's great to see that the topic is becoming increasingly relevant. I'll continue from Ian's last slide. As we said, we have already, for quite some time, been supporting our customers through their technology and AI journey, and that takes different forms.

Just to talk a little bit more about, on the left-hand side, the exploring AI, we have a large team of system integrators and software engineers that allow our customers to ultimately develop new types of applications, business applications, and in particular, they've been helping a lot of clients recently around machine learning, developing new concepts, new use cases, prototyping them, testing them so that they can be scaled later on. I think on the building AI, which my colleague, Vasagi, is gonna talk about in more detail, one really important point to note here is as we're helping our own customers to build their AI engines, it also allows us to stay on top of the game.

You know, we work with the most advanced AI companies on the planet, and we can't compete with them in terms of R&D firepower, but by being close partners in helping them build their own AI engines, we are on top of the game, which is a really important point. Focusing on the right-hand side, this is very much the core of our software portfolio with Language Weaver, Trados, and our content management portfolio. So this is very much today, you know, building the AI engine for the group. Obviously, there is a long history in the field of AI with Language Weaver, more than 20 years. And by the way, what is going on right now, you know, ChatGPT and so on, seems new and sometimes scary to many people.

It is just the new iteration or new advance in the field of AI, as we have seen multiple times already. So it is actually a familiar place to be for us, and Mihai is gonna talk about that in a lot more detail. Trados, I think, is a very important element in that it allows to manage the complex workflow and project management process in the localization industry at huge scale. And it's very much the glue between the different software products that we have in our portfolio and our Language Services. And important to note, as Ian said, you know, RWS itself is one of the biggest client of our software. We fly our own jets, and Maria, in particular, uses extensively our software portfolio.

The key point here, which I think forms the special source of RWS, is the fact that by using our own AI solutions, we are also able to generate ourselves data to further train and develop them. And that combination of software expertise, domain expertise, and language expertise to handle the topic of data in a secure way is quite special. So let's have a look at a quick video now. I love to always start from the customers, listen to them, and partner with them in innovation. We're gonna hear from Janet, from FAF, the Financial Accounting Foundation, which is responsible for FASB and GASB in the US, so all the accounting rules for both enterprises as well as government.

They have been a big client of ours, both in the space of content technology as well as language, and a key partner in further exploring what AI can offer to them.

Janet Brody
Associate General Counsel and Director of IT Procurements and Licensing, Financial Accounting Foundation

My name is Janet Brody, and I'm with the Financial Accounting Foundation. We had gone through the solution selection process back in 2019. We had the full implementation in 2021. We ended up implementing the full Tridion Suite, which was Sites, DXD and Docs. We felt confident that they could be a trusted partner. We also felt that the system itself was sophisticated enough to really accommodate our complex content needs, but at the same time, it presented as scalable and modular. Our publishing platform is probably our most critical technology system. So looking at it that way, we knew there was zero margin of error to get this right and get our content right, and a system that can support that. Tridion kind of seemed to check a lot of those critical, high-priority boxes.

It supported our front-end websites to help us really meaningfully engage our end users, again, where the content itself is so critical, and the integrity of that content is critical. To that point, we also believe that it could handle our... what is highly structured and complex content. We manage what's kind of referred to as monolith content. That was a huge consideration in selecting a system that can process that magnitude of content and can render it properly, and through multiple distribution channels. From a practical perspective, RWS is a partner in problem-solving for us. We know going into our work with them that their resources have a high level of expertise with a deep understanding of the technology. On top of that, they seem to understand how this all connects into the content space.

And at the end of the day, that's the key linchpin, which is connecting the technology to what we're actually trying to do as a business. The integrity of our content is so critical, and frankly, it can be a challenging space. Solutions that support content and language are, I mean, complex, and they're evolving. So we really maintain good touch points with RWS to keep pace, and it feels like they have a good handle on how those needs are changing. Like most companies today, artificial intelligence and LLMs, as a subset of AI, they're firmly on our radar. We're looking at the risks but also the opportunities. RWS, as a provider of our content management system and of our websites, they will always need to be a strong factor in how we may be looking at those risks and those opportunities.

Thomas Labarthe
President of Language and Content Technology, RWS

All right, what I like about this testimonial, you can hear how content and language are ultimately two sides of the same coin. You know, we heard from Ian earlier how the different technology waves have benefited the localization industry in terms of productivity, quality, and scale. There is this other very important dimension, which is technology enabling to handle more types of content, and from, you know, earlier on, monolithic documentations to web and social content, and more recently, multimedia content, with a lot of videos and audios. Importantly, the fact that enterprise content is increasingly consumed not just by humans, but also by machines. Here again, the fact that RWS combines the software expertise, domain expertise, and language expertise is a key asset in this race.

So let's start diving in more into AI now, and, I'd like to first do a quick clarification around terminology because it can be sometimes confusing. So at a broad level, artificial intelligence is any computer program that mimics the human intelligence, can sense, it can reason, it can make decisions, and adapt to context. Machine learning is a subset of that, whereas the more a computer system is exposed to data, the more its performance is gonna increase as it derives more logic from the data. One step further, deep learning has basically enabled to build more complex system by layering what we call different neural networks and achieve things such as computer vision, as an example.

Very recently, generative AI, just over the past three years, you know, has been building systems that can learn from existing artifacts and ultimately use these to generate brand-new ones. More concretely, you've also seen this with ChatGPT in the field of large language model. We basically take vast amounts of text to train these models that can then very much generate human-like new outputs. Sometimes, we get surprised by actually the creativity and the novelty of outputs that these systems can come up with. It's not just regurgitating what they have learned as an input. So this is very much, you know, the large language model, the area in which we are playing and we are focusing as RWS.

So the big question is obviously, you know, I always get that: How can you compete with, you know, the Microsoft and Google and Amazon, et cetera, in this space? And the reality is, we don't. We don't compete. They are actually partners of ours. And important here to understand the key ingredients that are needed to build a large language model. You need infrastructure, you know, that is compute power, definitely not our business. We work with Microsoft, Amazon, and others in that field. Then you need an AI model that will sit on top of that. You know, ChatGPT from OpenAI is the very well known of these, but as you go further down on this chart, you get to the very exciting area of open source large language models. And this is very much where we are focusing.

There are high-quality, open-source LLMs that are available to researchers right now that we can take and tailor in very much a customized way, in which we can also control topics such as, you know, data privacy and security, which is highly important to our customers and to our business. And the final ingredient is obviously data itself. And as we said, and again, you know, you will hear a lot more from Sagi and Maria on that, we have not only the experience of handling large language projects throughout the years, but we can also generate our own data in a very specific way to build those LLMs, and that really what makes us special. Final point from me before I hand over to Mihai.

Every day there's a new cool use case and demonstration of large language models. It's very exciting, but, doing a demo is one thing, scaling an application to the sort of enterprise-grade expectation of our clients is another. And for this, you need to combine four key elements: linguistic features, obviously, but importantly, the possibility to adapt and control those features. Security, absolutely crucial, and it plays to our strength and our long history in regulated industries. And finally, the cost of performance, because you need to be able to operate those systems at huge scale in a cost-effective manner. This is very much what we've been doing over the past years with Language Weaver, and we're leveraging this experience across our entire portfolio right now. So let us play a quick video, and I'll hand over to Mihai. Thank you.

Mihai Vlad
General Manager of Language Weaver, RWS

Good afternoon, everyone. It's lovely to see so many familiar faces and new faces. I never quite thought that we could pull this off, make security and AI look cool. Now, beyond looking cool, they are incredibly strategic directions for what RWS and Language Weaver does. A few words about myself. So, I'm Mihai Vlad. I lead Language Weaver. This is ultimately the kernel of AI innovation for RWS, and I've been sharing this journey for the past seven years with a team of incredibly gifted researchers and linguists and engineers to ultimately build products and software. In the similar vein to how maybe Tesla builds their autopilot, they use data. They do not necessarily use specific algorithms to achieve that objective. We do not use the images generated by eight cameras around the car. We use language, and the purpose is...

Our objective with Language Weaver is ultimately to build the autopilot, but for the translation industry. The journey started 20 years ago at the University of Southern California in Los Angeles with this team of researchers that started to build this software in this particular way. If you haven't experienced this so far, if you're not into engineering or software, it is something magical to just inject an algorithm with some data, and then you see it perform and mimic all those behaviors that are to be seen in this data. We thought that the journey was plateauing, only to have another revolution in AI with generative AI and large language models.

But before building the autopilot, we need to sell some products, and I'm gonna be talking to you about how we are achieving that. So for the session today, we'll dive quickly into the product and the use cases. We'll talk very quickly about how they serve the business needs of our customers, what makes us unique into the market, developing more on what Thomas has described and what you've seen in this video. So security, adaptability, enterprise-grade technology. I'm pretty sure I'll be kept honest by some of the familiar faces I've seen since the Capital Markets Day, and talk about the progress, and then dive right into the way large language models are gonna revolutionize our industry. So the first thing is, let's look into the three use cases for Language Weaver.

The best way to imagine this is, imagine you're at your desk, you might be working for an insurance company. You might be receiving a memo. It's a French insurance company. Receive the memo in English, and you want to understand what that memo is about. What you might be doing is take that document, run it through a free translation tool, understand what it's about, go about your day. Three mistakes with this, you might have exposed client confidential data, you don't necessarily know about the security or the quality of that translation, and thirdly, you might be in breach of the data privacy policies of your regulated institution. We have the alternative to that, and this is one of the use cases for Language Weaver, and I'll show it to you live. This is the very insurance memo.

Should you want to translate it in a different language, you might be accessing a tool like this. It's incredibly easy to use. Human beings, not experts, can do this, and the translation is supposed to happen in a matter of seconds. What's happening behind the scenes is this complex document with formatting, with headers, with images, with a lot of structure, ultimately gets converted into this document here. You can see, well, they kind of look similar, but the difference, the fundamental difference is that the document on the right is in French, the left one is in English. So you might be thinking, "Well, this is magic," and there's a lot of AI that got us to this point to maintain the fluidity and the formality and the accuracy of the translation.

While you might be thinking, "Okay, what's so special about this? We might have seen this somewhere else." Well, what's different, that we are doing this at scale, we are doing this securely, we are doing this with the approval of the IT department, and we are doing this in a way where we maintain the formality and the nuance of the language of every organization that deploys this. This is just one of the capabilities that Language Weaver has. If you're curious about the other document transformations we've got, Arno here, my colleague, will show you into the break. That was the first use case. The second use case is hard to describe because Language Weaver is hidden within systems usually used by public sector organizations and regulated industries, sometimes financial institutions and law firms.

What we do is we ultimately convert large volumes of content that need information to be extracted from, from the target language into English or into the language that the analysts or the lawyers or the paralegals can read and understand to ultimately make these decisions. The importance is speed, throughput, elasticity of the traffic, and we deliver all that to our customers. As easy it is to use, it is so easy to integrate, and the developers love it because they do not spend days and days just tinkering with the APIs. It's, it's natural to integrate within the IT stack. The last use case, which is bringing us considerable amounts of revenue, is making localization faster. What do we mean by that? Maria and my colleague, Matt, and Tracey will be talking about this.

We ultimately pass the majority of the content that RWS and some of our customers are to translate initially through Language Weaver, and the output is ultimately then being reconfigured or adapted or ultimately what we call post-edited by professional linguists in order to deliver that cultural match with what the customers need, or to really ultimately polish the translation, make sure that it's accurate, and it really matches the audience. The quality of these translations is getting better and better, but it is not enough to ultimately drive a solid business. We've expanded not only by building a better and better product, but we've expanded our business into looking to the adjacent market. We started with the government sector. This is where Language Weaver was born 20 years ago.

We looked at all the other adjacent sectors, like regulated industries, life sciences, financial institutions, law firms, and build a product in such a way that it can be deployed either in incredibly secure locations or in a secure cloud deployment, where we have the elasticity and the flexibility that these customers need. What all these customers need are actually three things. First of all, as you've seen, they need a software or a piece of software that uses the cutting-edge AI and translates fast, accurately, and it is ultimately matching the style of, of the content of the customer. The second thing is they need an enterprise-grade software. They need to rely on this.

There are important processes that run 24/7, that are critical for businesses, sometimes converting RNSs in the morning when you receive them from, you know, from the Asian markets to make immediate investment decisions. Those have to be made by initially converting the language into English so that investment decisions could be made. So this is an example of such a process. But most importantly, and what's unique about Language Weaver, is we do not believe that there is such a thing as a perfect AI technology or a perfect machine translation technology. We believe that there are perfect technologies for every customer that we are working with. It's ultimately tailoring the solution to the specific needs of that customer. So you might be thinking: Okay, so what do we mean by that?

But the reality is that if you were to work in the public sector, the same word might mean different things depending on where you position it into the sentences or depending on what the audience is. So the machines can't necessarily figure out which meaning of the word you're looking for. As such, you might be looking at the translation, you might be thinking, "Oh, that's bad. That's not good." No, in reality, you need to adapt the technology to suit that particular style. And if you were to make a parallel with self-driving cars, maybe the model, the autopilot designed to work on a racetrack, might not be as efficient as putting the same car on a dirt road.

So we have to work with this technology, and it is that difference that helps us win business by ultimately taking every business problem and building the right solution for that. A good example is the way Language Weaver was deployed into a Fortune 500 organization, where it wasn't deployed just once, but it was deployed multiple times throughout the IT stack of the organization. We introduced portals, we introduced components that were able to transcode content from one step of the process to another. And the result for this customer was that they not only reduced the cost of processing claims, we're talking about a multinational organization, insurance organizations. They were able to obviously reduce the cost of not fumbling about with all the documents, but also the speed increased.

And for them, speed was actually more important to process the inbound claims faster than it was to reduce cost. This is just one of the examples where Language Weaver has helped an enterprise customer reduce cost and improve speed. So in essence, like, what is our right to win? Thomas indeed mentioned that we partner with AI providers, but we also win in the market, in the machine translation market, in the enterprise machine translation market, because we have built the right solution, the right product, not just a particular model that, you know, it's a Python script and win some kind of a student competition. No, it's a rugged product that you can really trust to thousands of employees within an organization.

And then, as we'll hear from Vasagi, we are able to spin the data flywheel and produce data to ultimately drive a much better algorithm. And this is... Coming back to Tesla, by the way, this is not an investment advice at all, but it is beautiful the way they engineered AI into their production system, and pretty much we're following the same blueprint. They realized that it is not enough to have the best possible GPU and the best possible algorithm to detect traffic lights or objects. What was important is to derive or build more data, in their case, 3D images of the particular environment that the car traverses through, to deliver a better result, and we believe that as well.

We believe that data is more important or equally important to the quality of the research that goes into these algorithms. So to do that, how do we spin this flywheel? We spin this flywheel by bringing the experts in the loop. Who are the experts? The linguists that Maria will be talking about and the research engineers. This has a strategic importance for our business because it makes the product, and it makes the business stickier with our customers. They cannot replace something that derives really good quality of translation for the type of content that they've got. So how do we achieve this?

We drive better quality by upselling linguistic services to complete that last mile, and by last mile, I mean, we pre-translate the content, and in order to achieve perfection, we bring upsell linguistic services to complete that loop, and that data is being used by the customer to improve the quality of the very engine, making the flywheel complete. And we do not stop here. We actually take this very data, this... and in this case, bilingual data, English, French, English, French, that was revisited by the customer, and then we built a better adaptation so that the model is better tailored or suited to the type of content that we translate. And then we don't stop here.

The last element in the loop is that we might not have access to linguists or professional translators, but the paralegals and the subject matter experts within pharmacovigilance or within e-Discovery, they asked us to build a technology where they can provide feedback so that the machine doesn't make the same mistake, and they just want to interact with it and say, "Hey, please don't make this mistake. You should be using this word." So we introduced that directly into Language Weaver, so that Language Weaver learns immediately from the feedback provided by these customers. So it is a beautiful technology flywheel, but ultimately, by deriving stickiness, it ultimately drives more business. And what we want to do with the money that we get from these improved renewals, is invest into developing a larger partner network.

So diving quickly into the progress since the Capital Markets Day, we have obsessed over improving the quality of our generic models, and just last year, we've improved them, I think, on average, at approximately 14%-15%. So improve the quality that ultimately is a net benefit for RWS as a group and for our customers. And then we took machine translation and AI technology further, and we said: What can we do more to help professional translators reduce the effort that they expand on deriving those perfect translations? So what we did is, obviously, we've gone through improving the terminology. We built adaptable models, so models that are suited for our customers and for specific domains. And one research area that right now is going into production is Machine Translation Quality Estimation.

Again, if we were to make a parallel with the autopilot, with Tesla, how many people have used the FSD or the autopilot in a self-driving car? Not many people taking risks, I can see. So right now, it's actually a very risky endeavor, but the way that it's being deployed, it is being deployed as a safety feature, as an alert. So ultimately, you might be alerted on, I don't know, approaching a different vehicle or approaching a roundabout, and this is all AI helping the driver be more alert or efficient. So we thought, okay, on the journey to building this autopilot for translations, how can we use different AI technologies to make the translator more efficient? So here comes Machine Translation Quality Estimation.

It is ultimately an X-ray function that you can see here, where you get almost like a statistic set on what professional linguists from Maria's organization will be seeing when they have to publish the output from that document. They will probably accept 60%-70% of the sentences, and then look to improve maybe a smaller portion of that, and some of them, 5%, need heavy work. Now, this is not an assessment of how good machine translation is in general with other models, but the amount of effort that would be required by professional linguists to get us to perfect translation.

So this is incredibly important because it gives us a really good view into, obviously, the effort and the pricing we should go with, but it also helps translators look, you know, where the boundaries of the road is and where the errors might appear. And what you're seeing here, you will be seeing it in the demo area, is ultimately a scoring or a prediction of effort against every sentence within the document that was translated. And this is magical because it takes us beyond machine translation, and it helps us bridge that gap from what the machines can do and ultimately improve the... or ultimately reduce the effort of repeating the or building a technology that ultimately reduces the same errors over and over. And Maria will be talking more about this.

So when it comes to large language models, we are super excited about this. You've seen maybe one incarnation of one single use case, and how we want to deploy this throughout RWS. Well, this is just the beginning. The reality is that this technology, as Thomas mentioned, has evolved from smaller models that were able to handle one single natural language processing task, in this case, machine translation, and it started in 2016 with the Transformers. Then it led to multipurpose large, what were called then large language models, and in the end, we are using large language models in production right now.

So you might be thinking: Will this technology, will these larger models that are able to perform multiple tasks like summarization or sentiment analysis, and maybe even a modicum of translation, will they be good enough to replace the core technology that was built so far? And the answer is no, or not yet. So then the next question is, how exactly are we going to be using this technology? So you'll be seeing in the demo area, but these large language models for the task of translation, they're incredibly useful to filter out the input or eliminate the noise from the content that gets to be translated, and ultimately deliver a better translation just by this Dolby filtering function before the translation actually happens.

They are also really good at modulating the output and deriving different formality levels or different styles for the same accurate translation. So we want to be using them in combination with the technology stack that we have built so far. So what does the future hold? The future is for Language Weaver introducing and further productizing large language models, and then allowing, allowing us, and you'll see it in the demo area, to expand beyond machine translation and take these technologies and expand the value of Language Weaver, not to just make a better translation, but expand beyond machine translation. And we are lucky because so many employees and so many organizations are using Language Weaver. They are already connecting day by day, every day to translate documents.

The next step is to summarize the documents or convert them into different styles, or start asking questions across these data sets. So we, we are in an incredibly lucky position to do this. A few takeaways, as you've seen, super excited about what the LLMs are offering us, and we are definitely on the wave... on the AI wave. We have built a strong enterprise product with an incredibly precise position into the market by building on the security element and on the adaptation element. We are lucky to work with RWS, not only to generate the data, but also help produce those efficiencies and help with the speed of processing documents. And last but not least, we're excited about expanding the horizons of Language Weaver beyond translation and tapping into those document transformations.

With that, I'm going to hand over to Matt and Tracey to bring us back to what is truly important for RWS, that last mile of getting those perfect translations done. Thank you.

Matt Hardy
SVP of Products for Linguistic AI, RWS

Thank you, Mihai. Hi, everybody. Good afternoon. My name is Matt Hardy. I look after our translation, language technology products. I have started my career in technology. I was an engineer and had a background in consulting. I came to SDL, which was later acquired by RWS, came to SDL 20 years ago, and over those two decades, I've held many roles within our language technology from deploying our technology to consulting. What I do now is, I'm responsible for our product portfolio there. I'm joined today by Tracey, who I'll give a couple of seconds to say hello.

Tracey Byrne
Solutions Consultant, RWS

Great. Thank you. Good afternoon, everyone. My name is Tracey Byrne. I'm a solutions consultant with RWS. I've worked for the company for 23 years, and I've been in the localization industry for 26 years. I started out my career working on the services side as a localization engineer, and I started working with Trados in 2000, so subsequently with SDL and now with RWS. So during that time, I've had the pleasure of working as an engineer, working in training and support, and now consulting on our language technology.

Matt Hardy
SVP of Products for Linguistic AI, RWS

Thanks, Tracey. Tracey will come back in a little while to help us to explore the product itself, make it come to life. So we're gonna follow the same sort of structure that Mihai did. We're going to have a look at the product and who's buying it and what needs they are solving. We're going to look at our progress, and we're gonna take a look into the future of our products as well. So let's start off and introduce the three customer segments that we address. The first is corporate. We have 1,700 corporate customers, and these are the folks who have the content that need to go into the localization process. So these are organizations with websites, with user documentation, with videos, with software.

Everywhere that there's content that you need to get out to a global audience, it needs to be localized. LSPs or language service providers, that's what RWS is. And we sell to 350+ LSPs in the industry. And these are anything from our customers, the largest LSPs in the world. But it's a very fragmented part of the industry as well, and so we have products suitable for wherever you are on that scale of LSPs. And finally, linguists. A massive part of what we do is to provide the working environment for linguists around the world. Now, there isn't such a thing as a cookie-cutter translator.

You can have a professional translator who's been to university for four years to learn how to be professional and use our tools. You've got casual translators, you've got bilinguals who do reviews. All of these folks have a home in our products here. If we look then at how we address those markets, the first is Studio. Studio is that product that supports the localization industry. It does everything that a translator needs. It picks up the doc... the content, it extracts the work, all of the things that need to be translated. It goes and finds from various different assets how it can reuse content that has been created before, so you have to pay for it to be done again.

It checks your terminology, it links out to machine translation, does all these things, and 100 more things. It's where translators live. So, Ian likes to draw an analogy between accountants and Excel, translators and Studio. It's the same sort of day-to-day, spend your life there, and it's got everything that you need. If we move through, if you get a handful of translators together, you add some project management, you've got yourself an LSP, and that can then grow to the size of an RWS. But as I say, it's a very fragmented part of the industry, and we have, or these guys have the need to collaborate. That's the first need you have as an LSP.

So you want to be able to share these assets which are bringing you value and helping you to create more efficient translations. And you do that with Team from Trados. And then if we go right to the end, we have Accelerate and Enterprise, which are essentially two different scales of the same offering. And what we're doing now is we're adding workflow, we're adding process management. We're allowing corporates and LSPs to grow and scale the work that they're doing. Okay. Most all of these tools we sell as technology only. We have all of these numbers are for our technology customers. We exclusively sell Studio and Team as technology only. We sell our enterprise products, technology only, but we also bundle with our services.

We provide tech-enabled services, where we are using behind the services, all of our own tools to deliver efficient and high-quality translations. So this is our Trados ecosystem. You can see at the top those three buyers we had, the individual translators, the LSPs, and the commercial enterprises. Trados is a cloud product, so everybody gets the same thing, where you get access, and you get different features available. So all of these parts are common, whatever type of buyer that you are. So over here, we've got all of the features and capabilities you need to be efficient, to manage projects, translation projects in an efficient way, because all of this is about automation and scale. That's what we're delivering. So your project management features, your workflow, and your reporting.

In Trados, we're looking at how we... and this will become clear in Tracey's demo in a second, how we deliver the efficiency, and it's the translation memory that, those spots where we collect all of the previous translations so that they can be reused. It's agreeing terminology for branding or for industry terms with individual customers, so as they get translated, we enforce those terms in the translation, so we're driving consistency. Consistency means quality and reuse for customers. Trados is built in a, on a modern platform. It's built from APIs, and so provides APIs. By having those abilities to connect, we develop an extensive ecosystem around here. From Trados, we can connect to any large language model now and get the benefit.

Could be, as Thomas mentioned, open source models, could be huge enterprise models, could be customers bring your own models. Our APIs let us make those decisions and support customers with their choices. Connecting to Language Weaver, so that, we'll see later how much use LXD with Maria makes, makes of, machine translation and, and Language Weaver. This is how that comes, to be automated into the environment of the translators. We have a huge ecosystem of developers and partners who are out there using our API to build custom solutions, partner solutions, to extend Trados. Finally, in the bottom right there, we have our translation interfaces. These are, we get-- we have a, an online/offline hybrid, scenario. If you think about how, Microsoft allows you...

Office allows you to work on your desktop or online and then switch between the two, this is really important for us because at our scale, as a product professional, I think we have a, you know, a responsibility to make sure that we're covering as many translators as we possibly can. As you'll hear from Maria, the idea of long-term and midterm languages, you know, the translators that are in these emerging, you know, regions might not have the luxury of the connectivity that we expect here. Having online/offline switching capabilities is really helping us to bring along a whole new ecosystem of linguists. Time for a live demo. What we're going to do, we looked at the—or Mihai took us through the work with Language Weaver.

So we had an EV insurance document that went through instantly with Language Weaver. So what we're going to look at is, let's say that was a document, we're going to use the same document, but let's say it was a higher value document for the customer. So it is more impactful to the customer's business, it has a longer lifespan, or it has more eyeballs focused on it. Something will tell you whether it has more value. And when it does, you need to bring humans in, and then we merge that process with machine translation and human to bring it up to the quality output that's necessary. So I'm going to hand over to Tracey to walk us through that process.

Tracey Byrne
Solutions Consultant, RWS

Great. Thank you, Matt. So I'm going to use the short demo today to highlight some of the key cost and time-saving features that are available to content owners, to project managers, and to translators. So Trados Cloud Portfolio is... It helps to automate and streamline the translation process, and that delivers the best blend of speed, of cost, of quality, no matter what content type our customers are working with. Trados as a platform delivers security, it delivers tailored and focused features for all users throughout the translation supply chain. The centralized views and access allows content owners, allows project managers to have real-time access to their projects to ensure that translation tasks stay on track for an on-time delivery. Now, as Matt said, we're working with the same sample file that you saw Mihai demonstrating in his presentation.

So of course, you're seeing here quite small, costs, small volume, but in essence, Trados is used to receiving large volumes of content or indeed multiple deliveries over the course of, any point in time. So to submit a request, it's a very simple process. You simply select the content and then the level of service that is required. And in essence, in selecting the level of service, this will route the content through the correct process, selecting the appropriate resources and assets to get the job done. Trados supports, multiple file types, from documentation to web, from software to multimedia. And our content connectors, aim to streamline that process in automating the delivery of content from a content management system. So eliminating the manual and the timely processes in exporting content and delivering it, for example, via email.

Those content connectors are commercial content connectors in the industry, and of course, a tight integration with our own Tridion content management system, which Alex will be talking about. So once the content has been received, it's a fully automated process to prepare the files for translation. Trados will extract the text to be translated, protecting the original structure and layout of the document. Immediately, the costs or the quote for the work to be performed will be delivered to the content owner, and this allows them to see straight away what is the cost of translation, as well as the savings that they are achieving by having Trados implemented. Through Trados automation, the translator receives everything that they need to begin translating straight away.

The automated file preparation through Trados will pre-translate the file using a database that contains approved translations, and we call this a translation memory. So anything that has been translated before will be marked as translated, will be locked away, so the translator doesn't need to work on that content again, and that helps to reduce the cost and the time that is required. Similar matches or similar sentences will receive a suggestion from those approved translations in the translation memory, and that suggestion is then available to the translator, so they can adapt that translation for the context or the nuance of the project that they're working on.

And then, of course, for content that really hasn't been translated before, doesn't have any level of matching from previous translations, we have that tight integration, one of the use cases that Mihai spoke about, by having Language Weaver integrated directly with the Trados portfolio. And that means that we automatically receive those high quality, those neural machine translations coming through, in addition to the matches that are coming from the approved translation memory. So in this way, the translator receives a fully pre-translated file, which contains approved and non-approved translations. That helps to accelerate the translation process, and it also allows the translator to devote their time, their expertise to adapting the translations that are not yet approved, adapting them for the right context, adapting them for the nuance, and that in itself will increase the quality of the final translations.

Now, in addition to that, we have our centralized term base, and the term base will highlight any terms that are identified in the source, and it will automatically display them for the translator, along with the approved translations in each of the target languages. So this is really important for our customers as they manage their brand, their global messaging, the consistency of their content across different platforms, in ensuring that terms are used correctly and translated correctly across their content. The real-time preview that is available to translators allows them to see the translated content as it will appear in the final document. And if you think back to Mihai showing that in his presentation, you can see, again, the structure and the layout of the content is fully supported.

So this is important to reduce the human effort that may be required in typesetting or performing desktop publishing after the content has been translated. So once the project is marked as completed and the translations have been delivered, the content owner will receive an email notification. Our content connectors continue to drive automation and time efficiencies by delivering the translated content directly back to the content management system as soon as the project has been marked as complete. And here you can see the original source that we started with. The German translation in this instance is sitting beside it, and again, seeing that that overall structure and layout of the document is fully supported as part of that process. Back to you, Matt.

Matt Hardy
SVP of Products for Linguistic AI, RWS

Brilliant. Thank you, Tracey. What I hope you see from that is that Mihai showed a way of moving from a source to a target, a source to a translated document using just machine translation, and there's benefits of cost and time which are associated with that. Tracey has shown for the same document how you get the same, you know, result, a translated piece, but by bringing an additional quality and a new output level by merging humans into that process. That's because content is different, okay? Some content is applicable to one way, other content is applicable to another, other content needs even less automation and machine-first work, and is more human-based.

It's all about understanding the value in the output of the content so that you get the right process. So you've seen what we're doing, but why are we doing it? So what are customers getting from this? So the first thing is by having a workflow which is standardized from end to end, we reduce the friction. We reduce the friction of content coming into and through and out of the supply chain. And by adding the automation, which you see in the manual elements from Tracey, but you add that all into a workflow that's automated, you're driving efficiency and then ultimately cost and quality impacts from that by reusing it and using the guardrails that you get with the Tridion process. And we have end-to-end security. Security is built in and designed in to Tridion.

Front of mind as we go into this last year and next year, as we have started, and as we continue to research and develop LLM solutions, is that security and privacy of customer data is absolutely paramount, and that we don't... as we add these features in to Tridion, we're not losing control of data, we're not losing any of that security and privacy. That's absolutely part of our DNA in Tridion. There's a value for all roles. What we have in Tridion is an infinitely flexible platform, which means that you can, it encompasses any role in the supply chain, and it allows you to deal with any type of content that needs to be localized. And third, easy integration of all assets.

Tracey and I have both mentioned these, the translation memory, the terminology, machine translation, LLMs. These are all things which become part of just a translation engine. We don't think of these things as being independent. Should we use machine translation? When should we use LLMs? You know, these are blended together so that the right content and the right sentences within that content all can be treated differently, depending on, depending on the need. And so what we build with Trados is a backbone of the localization industry, which spans the entire supply chain. So let's have a look at this great case study. Company LearnUpon, they are a provider of e-learning platform.

The reason it's a great scenario to look at is that it's very representative of the problems that we solve for customers every day, every week, for thousands and thousands of use cases within our customer base. They needed to localize their user interface into nine new languages in three months. Now, if you just sent all of your user interface for humans to translate, you'd run out of humans very quickly, you'd run out of budget, and you definitely wouldn't get it done in three months. You have to combine all of the technology to allow you to do that. That's exactly what we did, and it's exactly what we do with our customers.

We have RWS translation services providing the humans, and we have Trados Enterprise providing the reuse of previous translations, providing the instant quoting, the automated routing, the terminology, the machine translation, et cetera, and we use the workflow to automate all of that together, which ultimately gets us to these great results that we duplicate for our customers every day. So quotes were now provided in minutes, so Tracey showed how they just appear, and automatically as you create the jobs. Improved translation quality means that the customer complaints came right down. We were looking at 50% faster time to market for those new languages. And anybody going to market knows, you know, 50% reduction in the time to get there means real, real impact to your business. And the automation reduced risk of human error.

The results were exactly what the customer was looking for. Nine languages translated in three months to help them to reach new markets. The right to win for Trados. We are a highly established player, so the Trados brand has 30 years of recognition and a clear suite of solutions which sit on that single cloud platform. It's efficient and has scalable development. As a cloud piece of technology, we embrace all the benefits of cloud, which means that, for example, we're releasing up to 10 x a day, thousands of times a year, which means that the latest features, the latest fixes, are always instantly with our customers, whether they are an individual linguist, an LSP or the largest corporations in the world. We have a very high level of stickiness.

The beauty of this, which is win-win, is that the more customers use the technology, the richer these assets get of the terminology and the translation memory for reuse. So the richer those assets, the more reuse, the higher the quality and consistency, and the lower the cost to do translations. So that means there's a huge stickiness that we get from reuse of the product. The more you use it, the better its output. Our progress over the last 12-18 months, I've split into three sections. So the first is about maintaining our market leadership. We saw our customer numbers on the first slide, and to me, it's about retaining and enhancing a rich feature set. So we have...

Based on that breadth of our customer base, we have a excellent understanding of what the industry and the different players need. We have LXD with us under the same corpus umbrella, and it makes it really easy to make sure that we're keeping up-to-date with exactly what the market needs in real time. We spent a lot of time this last year working on Tridion and Trados, bringing those much, much closer together, making it feel friction-free as you move content between the two. I'm really proud of what we have there. You'll see Alex speaking next. An improved go-to-market. We have a brand new Trados.com website released last week, and we've been very clear about delivering new, highly competitive packages, which meet the needs of all of those different buyers. Embracing the emerging AI technologies.

So this is a massive part of my day, and all of my team's day, how we get the research moving and how we find the right use cases and the right solutions to take advantage of, the new AI opportunities. So the three main areas we look at is about, apps for our Studio, our translator interface, to allow them to be more, more efficient by making suggestions of, of translations, alternate translations, helping them to adapt tone, and adapt to terminology and, and that sort of thing within the translations. To do something similar, but workflowed in enterprise as a slightly different, set of, requirements and, and benefits to be got, to be got there.

A really cool one, I think, is the AI copilot for Tridion, which we released last week, the first iteration of, which allows you in Tridion to simply chat to a Tridion chatbot. Not the sort of chatbots that, you know, we've always been used to. It's sort of the if this, then that type, navigation tree, but something which genuinely understands the question and can go and surface that from all of our documentation, our knowledge bases, our community, posts, everything, or become a resource that the AI can pick up. That's, you know, already helping with our customers adopt the technology and get the most out of it as they use it every day. Final section is dealing with, technical transition.

We are successfully migrating to the Trados platform from some of our other technologies, bringing our customers along onto the latest and greatest. 105 to date, which is a great start, and no adverse impact on year-to-year attrition. The future, Trados and AI. We're really looking at, obviously, how we now extend AI, and for Trados, I see that in two main areas. One is generative translation engines, how we can create and improve and adapt translations using LLM, and the Trados Copilot. Moving from surfacing just information and answers to interacting with the database, doing Tech-t o- SQL and automated reporting, and then ultimately to being able to actually control Trados through the chat interface, which will be really cool. We've got some amazing proof of concepts there. Thought leadership.

Really, we have a passion about going and talking to customers in the market about what we're doing. We have 15 accepted conference presentations, which we'll be going through this quarter, and we're launching a subscription platform to allow customers to engage with us and purchase the software in a self-service manner. Beyond that, we're looking to expand into machine-first video localization. We'll talk a little bit about what it means to localize video, and as you see that in terms of the human process, Maria will talk about the huge opportunities for different types of video to be processed machine-first or machine-only as well. We're partnering to add software localization capabilities, but obviously adding further AI features and continuing that transition journey to bring all of our customers onto the latest Trados.

Very nearly finished. I'm gonna do a quick 60-second video. This is-- enjoy this. It's sort of what we have already released in terms of snapshots of, linguistic AI features, and a couple of hints about what's coming next. There's a lot going on there. That's something I'm really excited and proud that we've been able to achieve this year. An awful lot of features there. It was a bit of a whistle-stop, also, sort of, flashed up there. Myself, Tracey, Chetan will be outside during the breaks. If you'd like to see some more, talk some more about that, I'd be delighted to do that. I have some key takeaways, which I'll leave you with. The first is that we are successfully migrating clients towards our latest technology platform. We are maintaining our market leadership.

We're looking at a clear roadmap to incorporate LLMs. We see this as a massive opportunity and something that we're really in the right position to continue to work with and grasp that opportunity. And finally, Trados retains a pivotal role in localization, in an AI-driven world, where we are continuing to bring human intelligence and artificial intelligence together. Thank you very much. I will hand you over to Alex.

Alex Abey
General Manager for Tridion and Fonto, RWS

All right. Hi, everyone, and thanks for spending your afternoon with us. I'm Alex Abey. I'm the General Manager of the Content Technology Business Unit here at RWS. Been with RWS for about five years, previously with SDL, and in the current role of General Manager for about 18 months now. A bit about me, I live in the San Francisco Bay Area. I have worked in enterprise software for a long time, more than two decades, three decades, and in the SaaS space for quite a while now, too. I have a good amount of expertise running and scaling SaaS businesses. Let me give you a quick session overview here. What we want to cover today is a bit about the category and portfolio that we have in content technology.

I then want to take it from the abstract and try to make it concrete by getting into an in-depth case study that I think everyone will resonate with. Then zoom back out and talk a little bit about the market and give a market perspective. And then we'll end with a quick video showing where we're headed with AI in this class of product. So let me start orienting you in terms of the products that make up the portfolio we have for content technology. There's four of them. Two of them may be more familiar than the others. Tridion and Contenta came over in the SDL acquisition and have been in the portfolio for quite a while. I see, Tim, I'm going to point you out.

Tim Russell-Jones, who runs the Contenta business, is here in the audience as well, so please find him. The two more recent additions are Fonto, who we acquired about 18 months ago, and then Propylon, who joined about three months ago, so the newest members of the family. I know Ian pointed out John, but John Harrington is in the audience as well. One thing I'll just say, because it might have sort of slipped by when we showed the customer testimonial, is that that was Janet Brodie, who's one of my favorite customers, and she has deployed Tridion very heavily at the Financial Accounting Foundation. Just to sort of tie that back in for you a bit. I'm really going to kind of start at the start here and define what we mean by content technology.

Content, in particular, means a lot of things. It could be YouTube, Netflix, Disney, a lot of different things. That's not what we mean. What we mean is business-critical information that's produced by organizations. So that's our definition of content. And then technology, very simple, it's enterprise software. So that's. We are an enterprise software business unit. Now, software companies usually exist in a category of some sort, think ERP or CRM or something like that. We exist in a category that's called component content management. It's also sometimes called structured content management, so this is our category. Customers buy this category of product to accomplish four things, to help them author, manage, collaborate, and publish content. So translation fits into this as well.

It's usually sort of at the end of the process, and so you'll see how we integrate quite nicely with Trados and the rest of our translation technology and services products. So you maybe have heard of content management as a category. You may not be familiar with the concept of components. What do we mean by components as tied to content management? So what we do with components, or what we help enterprises do, is to take what otherwise would be large, monolithic documents and break them up into small components. We atomize them into little logical modules, and so this is kind of a visualization of what that might look like with a document.

Another way that I like to think about this, it works for me, is you've got documents on the right being the house that has been assembled with a bunch of components, and you've got the components on the left. And I think what's really kind of intuitive and a bit foreshadowing of the rest of this presentation is that you'll immediately understand that those components can be used to build a house, but they could also be reused to build a, you know, starship or the Eiffel Tower or whatever somebody might want to build, so they can be reused. Now, you may ask, "Why would an enterprise want to move to componentizing their content?" And there's really four aspects of that.

So it's the idea of being able to write and create your content once and then reuse it many times. And so it's not only writing it, but that content might need to get approved by legal, it might need to get translated. You can do all of that once on a component of content, and then that's ready to be reused throughout, you know, different areas of the organization, which leads to this concept of omni-channel publishing, where what we're doing is we're helping enterprises assemble these content, these components, so they can be published to any kind of format, whether, you know, different screens, et cetera. And then wrapping that is a full audit trail around this whole content life cycle.

So the idea that you can manage this throughout that process, manage it throughout the process. Now, this is a kind of new way of working for enterprises. It's been adopted for a long time, but you may ask, "What kinds of content would an enterprise want to use this for?" I divided it up into a nice two-by-two matrix, where you've got high-value and low-value content and then transient and persistent content. So examples of high-value, transient content would be like hero marketing messages or, product launch information. Low-value transient would be like social media posts, blog posts. Where we operate is in the upper right-hand corner, and that's high-value, persistent content. It's really the lifeblood of the way an organization runs and conducts its processes.

Just to layer on the idea of LLMs here, we see LLMs very much acting as a potential substitute technology at the lower end of the value spectrum here. But at the upper end, we see them as copilots, and so we see it as our job to help incorporate LLMs into our products, so our customers can be more productive and more efficient. So with that, let me show a video that I think ties this all together, and then I'll be back to dive into the use case.

Speaker 18

In highly regulated industries such as life sciences, manufacturing, or financial services, content relating to a product can be as important as the product itself, or it is the product. But these industries are struggling to produce accurate, compliant content quickly enough. Their existing content tools and processes can't keep pace with go-to-market schedules, more stringent regulatory oversight, or cost-cutting pressures. How can you avoid delays or compliance failures if you're forced to deal with cumbersome collaboration between co-authors and reviewers, poor reusability of content with all the risks and inefficiencies of content duplication, and inadequate content governance controls for compliance and audit requirements? To address these issues, editorial teams are turning to structured content authoring. Structured content authoring solutions from RWS are transforming collaboration, reusability, and governance for content teams. Firstly, we're improving collaboration through an online authoring and review platform with an intuitive Word-like interface.

It requires no XML training and is highly stable and secure for concurrent editing and review, even when working with external contributors. Secondly, we're improving content reusability by making content creation modular. Individual content components are combined on the fly to create documents and other outputs for any channel without duplication. It's easy to update a component and instantly cascade it wherever it's used, so critical content is always up-to-date and accurate. Finally, we're improving governance, not only by eliminating duplication, but also with 100% reliable audit trails, granular access rights, and rules-based content creation. With structured content authoring solutions from RWS, it has never been easier to create accurate, compliant content at the speed you need and a fraction of the cost.

Alex Abey
General Manager for Tridion and Fonto, RWS

All right, great. So I think that video does a very nice job of summarizing the kind of need of a customer. What I'm gonna do now is really make it, like I said, concrete by diving into the use case of how a large electric vehicle manufacturer uses Tridion in this case. We're focused on Tridion, but you could imagine similar use cases using Contenta or Propylon or Fonto. So they're targeting different vertical markets. Tridion happens to be most appropriate for this particular market. So what you see here is a screenshot from this manufacturer's website, and you'll be drawn to the fact that there's lots of languages there. So you probably think I'm gonna talk about the linguistic challenges involved with translation, and they're, those are substantial and important, and we do a great job satisfying them.

But really, what I want to draw your attention to is the content management challenge here, because these aren't just direct translations of a single English version of a manual. Because of regional variations, things like left-hand drive, right-hand drive, different legal issues, they're actually variations of manuals. So you get this issue where you've got 85% the same content, but 15% different, but in a very mixed, mismatched kind of way. So you got a lot of different variations of this manual. But the variations get greater because what I've done here is I've clicked into the Italian language version of this particular manual. But what you'll see is it's actually not the Italian language version for Italy, it's the Italian language version for North America.

It's for a particular model year, so 2021 and later, meaning that there are earlier versions of this same manual, and it's for a particular version of software on that vehicle. So now you see that you're going from sort of 20 or 30 variations to maybe hundreds of variations. But it gets even more granular than that because this is something that you may or may not be familiar with. It's a vehicle identification number, a VIN number. Your... Every car, your car, has one of these. Automakers now want to deliver customized experiences down to that VIN number level. So they're they can they want to tie the content to get the right content to the right vehicle. So they want to be able to do that with the, with, user manuals.

This is-- you, you just looking at the user manual use case, you say that's kind of a big deal, this problem we're solving for this EV maker. Actually, they want us to go a lot further, and they are going a lot further. There's two, as I go through the rest of this, two words I want you to keep in mind. One is reuse, and the other is consistency. Where else would they want to use some of that same content? They, they want to use it in their whole after-sales experience, so customer service and the rest of the aftermarket service experience. They want to make sure they've got consist-- the same consistent content, and they want to reuse it to drive this experience.

That same content gets given to the repair and maintenance people as well, so that they have repair instructions, and again, often customized down to the VIN number, so they're working with the right part set and the right repair instructions for the right vehicle. So switching from aftermarket to pre-sales, you've still got, you know, that same kind of basic content set. A lot of automakers have, you know, a giant challenge about how do they drive a consistent experience through their dealerships. So this content is used in training, and driving the entire dealership experience. Of course, it gets used in brochures and collateral. It gets used on the website, on a mobile app, anywhere there's technical information about the vehicle. And again, it's got to be consistent.

If they say somewhere that it's the driving range is 417 miles or kilometers, it's got, that's got to be consistent across all these channels. And then it, what's happening now and where there's a huge focus is reusing that content in the in-car digital experience as well. So it drives a lot of that. The place that's a little left field, as we would say, is these auto manufacturers have a large regulatory burden now across multiple jurisdictions, especially as they're moving into autonomous vehicle sort of where that's evolving and the need to get past regulators in each of these areas. And then to sort of cap it off, because cars are software now and software gets updated, the content has to stay in sync and stay updated with that content.

And this means you've got the over-the-air updates happening, that content experience has to remain consistent. Now, where we have a great advantage is, as RWS, you have to do all of this across all these use cases in multiple languages. And so this is where our integration with Matt and Trados and Maria and services allows us to offer a really differentiated and compelling offering.... So let me summarize that this whole user experience by, or use case, by saying there's really three things they're looking to get out of this. Process efficiency is absolutely important, the ability to get reuse and efficiency from the content creation process. User experience is key, and that's about getting the right content to the right person at the right time. And then they need a governance and audit trail throughout that whole content creation process.

So now I'll zoom out a bit and talk about manufacturing has long been a place where structured content has been well adopted. What we're seeing now is a lot of growth in areas outside manufacturing. So we saw FAF was in the customer testimonial, has nothing to do with manufacturing. Five areas that we're seeing a lot of growth potential right now are life sciences, financial services, hospitality, audit and accounting, standards and publishing. And so we think we've got some great lighthouse accounts in each one of these areas, and it's. These are really big growth areas for us from an emphasis perspective. And let me go back to the portfolio we've got and try to map that to some of these opportunity areas.

So Propylon very much targeting what we call rule makers and rule takers in legislative, legal, audit, and accounting. Tridion, targeting regulated content and technical content. Fonto, targeting very much pharmaceutical as a specific use case, and Contenta being very strong in aerospace and defense. So if I summarize our rights to win. So we have established brands and really strong brands in each of these vertical segments, and it gives us great coverage across the needs of a lot of different vertical segments. That vertical focus allows us to match something like Contenta to aerospace and defense, or like Fonto to pharmaceutical, and really deliver something that matches the needs of these verticals. And then we are truly differentiated by having an end-to-end experience with our integration with, with Trados and the rest of the localization chain at RWS.

So now, quick video, and then I'll wrap up. This is a video kind of showing some of the cool ways we are incorporating AI as a copilot, what we call a copilot functionality, into the platforms.

Speaker 18

AI can supercharge your structured content processes, but how are we embedding this into our technology at RWS? Let's look at an example for an insurance company. A support agent receives a call from a customer whose car trailer was damaged while another driver was driving the vehicle. The agent first checks if the trailer is covered under the insurance policy using an AI-powered chat. The system responds with a clear answer. The agent can also see the actual policy that underpins this answer and suggestions to further navigate this topic. Next, the agent checks coverage for another driver. As you can see, this is a guided discussion based on true policy content rather than a hallucinated AI story from a large language model. We call this a trustable chat. Because the system uses structured content, the agent also has immediate access to the actual policy that underwrites each answer.

Next, they explore the process for vehicle inspection. This is also enabled by structured content. The agent is guided through the steps presented as smart suggestions. They select the relevant options for vehicle inspection. The customer gets the exact answers he needs, and the agent doesn't have to crawl through long documents to find them. But how can an organization create this highly relevant content? It's done via AI-assisted authoring. For example, qualified underwriters write up these policies but often use rather complex language. AI can improve readability by simplifying language, while the author stays in full control of the changes proposed by the AI model. In our structured content authoring tool, AI acts as a copilot that can help with tasks such as eliminate writing duplicate content, generating titles, intros, and summaries, and much more. At RWS, we help companies streamline operations with trustable AI for structured content.

Alex Abey
General Manager for Tridion and Fonto, RWS

All right, so I think there's... Our customers are really excited about the ways that AI is gonna take. In the same way as AI is helping programmers become more productive, it's gonna help authors become much more productive as well. So I'm gonna leave you with four takeaways as we move into a break here, or a Q&A session. So we have a great multi-product portfolio, and we have products that are leading in each of the key segments in the market. We're absolutely on the right path, building AI in as a copilot set of functionality. We are targeting use cases beyond traditional manufacturing, and we see a lot of upside and growth options there.

And then we're on a journey to get the products cloud native, so we're able to take them into the mid-market as well and really satisfy what we see as a large TAM there that we can go after. So with that, thank you very much, and I think we're gonna move into a Q&A.

Ian El-Mokadem
CEO, RWS

We are. Right. Thank you, Alex. So if I could ask the speakers who've just been up to step down to the front. And I think we'll take about 10 minutes of questions in this format, then we'll take a break for about 20 minutes. Obviously, we can ask questions that way as well, and then we'll come back for the final session. So let's start in the room. If you've got any questions in the room, just put your hand up, and when you do, speak, could you just say who you are and where you're from? So yeah, James, over there.

James Beard
Equity Research Analyst, Numis

... Thanks. Hi, it's James Beard from Numis. I've got one question, please, regarding machine translation. Could you talk a little bit about the competitive backdrop within that space currently? We've seen the likes of DeepL raise quite a lot of money at quite a healthy valuation in the last 12 months. Yeah, just interested to sort of get your perspective on your assessment of your product set versus their product set, other product sets that are in the market. What, why you think you're different, those sorts of things, please.

Ian El-Mokadem
CEO, RWS

Mihai, happy to take that one?

Mihai Vlad
General Manager of Language Weaver, RWS

Yeah, absolutely.

Ian El-Mokadem
CEO, RWS

Oh, take the mic. Are your mics okay? You got your mics on?

Mihai Vlad
General Manager of Language Weaver, RWS

one, two. one, two.

Ian El-Mokadem
CEO, RWS

Yeah.

Mihai Vlad
General Manager of Language Weaver, RWS

Okay.

Ian El-Mokadem
CEO, RWS

Right.

Mihai Vlad
General Manager of Language Weaver, RWS

Yeah, so, quite a few questions in that. I'll tackle, I think, two parts. We're clearly into a market where a lot of money is being invested, because a lot of competitors and players see the importance of transforming content in various languages and ultimately in different formats. What we... What we're pleased to see is that the blueprint for going to market in language evaluators, to a certain extent, mimic or copied by some of these players. We're incredibly happy to be a firm competitor or a firm provider of enterprise-grade products, not just consumer products, or not just translator-focused products. The way we go to market is holistically by combining services provided by RWS, the security acumen, the enterprise-grade product, and we are incredibly focused or focusing our R&D efforts on adaptability.

So instead of building a generic technology that we hope will be good for all the consumers or translators, we strongly believe that the answer to develop a strong, reliable business and ultimately build stickiness, is to work heavily on that adaptability spectrum. And we're not just adding one feature, but we've got, I think, maybe five or six across this spectrum that just drive the quality further, the more the customer invests time and data, and then bring our linguistic services. So what we're excited, it just makes us want to win more.

Ian El-Mokadem
CEO, RWS

Thanks, Mihai. Any other questions? Hello?

Karl Green
Director of Equity Research and Business Services, RBC Capital Markets

Yeah. Thanks very much. It's Karl Green from RBC. A question for you, Matt. I think just going back to the Trados translation memory, I think one of the columns in the live example was showing kind of the, the accuracy, the matching accuracy, 100%, 90%, et cetera. I mean, how quickly is that evolving to the extent to which, you know, an individual translator would be saying, "Do you know what? This process is taking me, you know, 2 minutes. It used to take me 20.

Matt Hardy
SVP of Products for Linguistic AI, RWS

Yep.

Karl Green
Director of Equity Research and Business Services, RBC Capital Markets

You know, where does that go from here, I suppose, more importantly?

Matt Hardy
SVP of Products for Linguistic AI, RWS

So that's a great question. So some of our customers are running it... Maria, keep me honest, 95%+ reuse from their TMs, so it can become extremely productive. Now, that takes a long time to get to. It takes a lot of maintenance on the translation memory. But yeah, that is absolutely designed to minimize the work of translators. Now, the clever piece then is if you apply machine translation to what's left, then you have potentially for the content that suits that way of working, helps you to, again, reduce the amount of work that needs to be done by humans, freeing them up to do the next file, and the next file, and the next file.

Ian El-Mokadem
CEO, RWS

Okay, first. Kelly?

James Beard
Equity Research Analyst, Numis

Okay.

Ian El-Mokadem
CEO, RWS

Sorry about that.

Katie Cousins
Equity Research Analyst, Consumer, and Digital Technology, Shore Capital

Thank you. Katie Cousins, Shore Capital. Just interested in when you're talking with clients from different industries, is there one industry in particular that are more keen to adopt a lot more AI technology, or is there some that are quite hesitant? Yeah.

Ian El-Mokadem
CEO, RWS

Okay.

Mihai Vlad
General Manager of Language Weaver, RWS

Yeah. Actually, right now, we see across all industries, clients being really keen to adopt AI, but very few know how to, which is interesting. So there's a bit of a panic moment where, I guess, at the C-level, there's a question: What's our AI strategy? How are we gonna roll out large language models? And the next question is: Who knows about language here? And so very quickly, we get a phone call. So it's interesting because it's one of these moments where we see language having more visibility at the C-level versus ever before.

Ian El-Mokadem
CEO, RWS

Maybe I'll just add, I think if, if John Hart, who runs our regulated industries division, were here, he would say that he's seen a shifted appetite to use AI in what would, you know, typically have been quite conservative industries, like life sciences, like finance and legal, as we heard from Mihai earlier. So we're definitely seeing, you know, more amenability to us saying, "Can we deploy an AI-enabled solution as part of the workflow?" Something we would have been pushing for quite some time and would previously been a bit very, very cautious. Now we're seeing a greater propensity to have those conversations.

Mihai Vlad
General Manager of Language Weaver, RWS

But then it's also almost systematically the next question, which is: Can we do this securely?

Ian El-Mokadem
CEO, RWS

Yeah, exactly.

Mihai Vlad
General Manager of Language Weaver, RWS

So it's back to really those two things, you know. Excitement about the opportunity created by AI. Everybody wants to get in.

... and the question: How do we manage risks?

Ian El-Mokadem
CEO, RWS

Yeah. Calum?

Calum Battersby
Director Senior Analyst, Berenberg

Great. Thanks, guys. Calum Battersby from Berenberg. So another question on the competitive dynamics. If we look at this from the services side, it looks like you're more vertically integrated than most of the other LSPs. So does that surely imply that you should be quicker at automating parts of the workflow, as you've shown us today? And if that's correct, does that give you a cost speed advantage, and how do you take advantage of that to take share, if that's the case?

Ian El-Mokadem
CEO, RWS

I'll take that one. I think it's absolutely, you know, the concept here. I think the combination of having the in-house technology and the depth of services capability and the in-house language platform that you'll hear about later on, it's our ability to blend those three things with software developers in-house who can help us evolve the products. Taking guide from one of the biggest, you know, language, you know, production functions on the planet. You know, we are really ideally placed to combine those three things in a way I don't think any of our, any of our competitors are. Now, that's not to say that we don't have some pretty decent competitors out there.

Of course, we do, but I think we really do have a unique product set in combination with that services capability, and that's what makes us quite exciting. And a very key part of that is having the most efficient production platform we can build, and we're very, very obsessed with that. And when Maria comes up to stage, we always joke, but Maria literally measures cost per word to three decimal places of pence. And when you get her monthly report, that's how that bit of the business thinks, and we couldn't do that without tech. One over there, and then we'll see if there's some online questions in a second, but...

Kai Korschelt
Managing Director, Canaccord Genuity

Thank you. It's Kai Korschelt. Just if we take a step back, I think the theme of AI is obviously that it's automating the translation process and, you know, we talked about sort of the cost per word or however you wanna measure the sort of cost per volume equation is getting lower for you guys. How do your clients think about in sharing that, right? Because I think the sort of...

The broader question is, if something that, you know, at the moment, even still requires some human post-editing, in three-five years' time, if the automation rate can be 99% or pick a number, you know, why, why does that not commoditize the pricing further and put pressure on the top line, even as the volume of content continues to grow? And I'm just wondering, and how do you kind of, how do you think about the cost curve, and how do you manage that?

Ian El-Mokadem
CEO, RWS

Yeah. I mean, look, that cost curve was very much part of our thinking when we put the strategy that we're now executing together. So that trend, you know, you saw the graph earlier, that trend of technology going upwards has also corresponded with a trend of cost per word coming downwards. But at the same time, what we've seen is a growth in the other activities that sit around it, a growth in content, and a growth in the other services that our linguists are providing. So probably hold that question for the afternoon session, when we'll talk a lot about what this is doing to the time that our linguists are spending, where they're spending it, the sorts of skills that we need, and that has a direct sort of, you know, feed, obviously, into the way we're charging our clients.

So, if we haven't answered it by the end of the second half, well, come back again, Kai, and we'll keep going at it. Are there any questions online before I forget about colleagues who may have dialed in?

Litza Dubleva-Servatius
Product Owner of Translation Technology, Coca-Cola Europacific Partners

We've got one question online, but actually, it's primarily the same question.

Ian El-Mokadem
CEO, RWS

Yeah.

How does it-

Litza Dubleva-Servatius
Product Owner of Translation Technology, Coca-Cola Europacific Partners

I think we've mostly covered that, so-

Ian El-Mokadem
CEO, RWS

Okay, good.

Litza Dubleva-Servatius
Product Owner of Translation Technology, Coca-Cola Europacific Partners

We'll hand over to you.

Ian El-Mokadem
CEO, RWS

So, maybe one more question in the room, and then we'll go for a break, 'cause... Just get the mic to you.

Speaker 17

Hi, yeah. Tom at Investec. Just a question on the Trados suite. I think there was a stat on the slides there talking about sort of market, being a market leader. I don't know if you've got any sort of specific stats in terms of, you know, your actual percentage of market share and maybe how that's evolved over the last few years.

Matt Hardy
SVP of Products for Linguistic AI, RWS

Well, we've got,

Ian El-Mokadem
CEO, RWS

The mic.

Matt Hardy
SVP of Products for Linguistic AI, RWS

Yeah.

Ian El-Mokadem
CEO, RWS

There we go.

Matt Hardy
SVP of Products for Linguistic AI, RWS

Yeah. We've got, let's say, several different customer segments. We don't have necessarily the share on each of those. I think we're fairly confident that we have the lion's share of the linguist market at 250,000 licenses or users accessed in the last 90 days. That's a very strong number. Yeah, for the corporate, it's harder to say, right? Because there's a lot more alternatives. It's a lot. You know, we share that market a lot more, but we definitely see ourselves at the leadership within that group at the top of the market.

Ian El-Mokadem
CEO, RWS

I mean, Trados is just, if you're in our industry, it's kind of incredibly well known, and we work very hard to propagate that. So we work a lot with university campuses, both to attract talent, but we also like to encourage the use of Trados when people are on courses relating to our industry, so that they grow up using the tools that we develop. And I think that was one of the things when we sort of, you know, revisited the strategy last year, where I think we recognized the need to kind of, you know, keep working on the market leadership of Trados. To a degree, prior to the merger, with multiple priorities within SDL, you know, one or two of the products had perhaps lost that focus.

I think what we said was, "We're gonna focus on Trados. We're gonna gradually phase out," and Matt mentioned this, "some of the other, competing products that that SDL had acquired along the way," and that's very much the strategy. Matt referred to the fact we've been migrating clients off the other platforms and onto Trados, and the go forward is very much to focus on Trados, focusing on marketing and development effort there, to make sure it remains that in that leadership position. I suggest, because it's been quite a long first half, but the second half is much shorter. I suggest we take till maybe 4:20 PM. Please go and have a cup of tea. Look at...

There's gonna be two demos in here, two demos in the other room, and please ask any other questions you'd like to during the break, and see you in about 20 minutes.

Litza Dubleva-Servatius
Product Owner of Translation Technology, Coca-Cola Europacific Partners

Hello, I'm Litza Dubleva-Servatius from Coca-Cola Europacific Partners. I'm the Product Owner of RWS Translation Technology for our enterprise. So we work with RWS since August 2019, so it is more than four years now. We started with a single Studio license in a small proof of concept phase, then moved to a digital pilot with GroupShare, and continued with the scaling project in 2021, with Trados Enterprise and Language Weaver. So we did for the scaling project, a seven steps RFP process and selected RWS, as they offered the best overall price for technology and machine translation post-editing. So we work with big pleasure with RWS, as they always listen to our needs and implement what we need. We have regular calls with the product owners.

We express our needs, and they work towards putting our requirements on the roadmap of the different products, and at some point later, we see them implemented.

Ian El-Mokadem
CEO, RWS

Okay, thumbs up. Round two. Welcome back. Hope the tech demonstrations were helpful. If you got a bit cut short, we will be carrying on after this session, which is a shorter session. So, just to sort of reorient, we're gonna focus on two things now to complete the story. We're gonna focus on TrainAI, which is all about training AI engines. And then we're Language Experience Delivery team and how we are leveraging all of this technology inside the business to support all of our services clients. So, without any further from me, I'll hand over to Vasagi.

Vasagi Kothandapani
SVP of Strategic Accounts and Head Train AI, RWS

Thank you, Ian. Hello, everyone. Excited to be here. Today I'm gonna talk about TrainAI Data Services, which is our offering to build AI systems, rather TrainAI systems. Brief introduction about myself. I'm the Senior Vice President of Strategic Accounts and also head TrainAI. I have been with RWS for about five months now, and prior to RWS, I spent a couple of years with Appen, managing a large portfolio of global accounts supporting their AI business. And prior to that, spent more than two decades with companies like Cognizant Technology Solutions and Core Logic in various technology consulting, sales, and engineering roles. I've also worked with various industry sectors like BFS, hospitality, high tech accounts, and Fintechs. Without further ado, let's get into TrainAI.

Today, most of us check our social media feeds, or use maps to navigate, or watch a movie recommendation, even type an email using auto-correction or suggested text. All of these are great examples of AI. How does AI learn? Well, it needs vast amounts of data to perform the operation it's doing for us. But what if wrong data is used to TrainAI? Sometimes, the outcome can be less harmful, but most times it can lead to some serious consequences. This is where TrainAI by RWS comes in. We provide clients with responsible AI training data. To build AI applications, machine learning models needs to be developed and trained. The AI applications can be anything from chatbots to voice assistants, to complex systems like self-driving cars or medical imaging systems. Almost all of them need to be trained.

Research says that data scientists spend almost 80% of their time on the data strategy. That's where TrainAI fits in. So we have a range of services to help in data collection, data annotation, and validation. All of this is helpful in building AI systems. Now, I'm sure all of you are hearing a lot about generative AI, which is another technology which took us by storm last year, and generative AI works slightly different than the traditional AI applications, right? We quickly pivoted to providing additional services to support build of generative AI. Along with our existing SmartS ource, our AI community, we also built in subject matter expertise and added services to collect content and data. Now, for generative AI, the content or the inputs that go in is called as prompts. So what is a prompt?

Prompt is nothing but a textual version of a command or an instruction, which is given to the AI engine, asking for an output to generate, right? Prompt engineering is all about giving the right instructions so that the right content can be generated. So we have a service to help in generation of prompts as well as in fine-tuning prompts. Reinforcement learning and human feedback. This, again, is very different than the traditional AI compared to the generative AI. We are talking about content being generated by AI, and it needs to be validated if the content is relevant. It needs to be evaluated, edited, and moderated. Another important aspect of fine-tuning generative AI is risk mitigation. We apply what's called as red teaming or jailbreaking to uncover vulnerabilities in the large language model or the generative AI.

I'm sure all of you have heard about the hallucinations that generative AI can generate, which is really untrue content or harmful content, which is not relevant. So there's a process by which we could generate prompts and test out the scenario so that the LLM or the generative AI is fine-tuned to generate the right content. And of course, language support. We are already language specialists and have services to provide locale-specific data and testing services. These are in addition to the services I spoke about in the previous slide. Let's go into details. Now, how do we deliver the training data? Here is the process.

We are looking at raw data, sometimes provided by the clients, or we collect this information, and then we have a strong community of AI data annotators or simply annotators, linguists, researchers who cover multiple demographics of, like, 400+ thousand language variants and 175+ countries. So all these folks are specifically used to train a particular type of content or data. We use our TrainAI platform to collect and annotate this data, validate them, and then feed it into the AI engines. What does data collection look like? It could be text annotations or text data. For example, assume that we are building AI engine to train a voice assistant. The voice assistant would need various intents or, for example, wake commands to, you know, kind of activate and respond back.

So there could be a series of intents which needs to be collected and used to train a voice assistant. Likewise, on audio or speech recordings, again, multiple applications where audio or speech could be used to train systems. For example, again, a voice assistant. I remember an example from one of my projects where the client specifically wanted to train the AI system for an accessibility use case, and they asked us to get data of individuals with a stutter. If a person with a stutter has to use a voice assistant, how would AI recognize that voice if the engine is not trained? So we had to specifically collect samples of voice samples from individuals and then use them to train.

So there could be unique scenarios where you could get specific audio or, you know, speech clippings to annotate. Likewise, images or videos which are used in, facial recognition, image recognition systems. Similarly, in GenAI, it's more about prompts because generative AI is pre-trained, and it generates information. So it's all about prompts and how do you use effective prompts to make the engine work better for you. That's data collection. What does annotation look like? Again, a simple text annotation could be, a blob of text, and we could be annotating specific sections of the text. For example, annotate a name of a person or a date. A real-world example could be a sentiment analysis system.

One of the systems I have worked in was used to, kind of, read through the customer comments and categorize them as positive comments or negative comments or, you know, neutral comments. So you feed in a bunch of data into the AI systems with annotated text elements, which helps the engine to learn and identify what kind of sentiments are embedded into the text. Likewise, on audio annotation, so it could be an audio clip. For example, it could be an AI used within a car or a cabin, and the engine needs to differentiate between, let's say, noise within the cabin or a child crying in the background or music playing in the background. So it could be kind of to like segment out various audio annotations.

The numerous use cases, depending on, you know, where you're going to use the AI, depending on that, it could be annotated and trained. Likewise, for image annotation. So again, image annotation is used in various applications and scenarios, pretty much to recognize objects and, you know, make the engine or the AI application feed them. So that's, that's image annotation. So likewise on video annotations. So video annotation is a great example in self-driving cars. So we are teaching the AI within the car to kind of identify objects. It could be other cars on the road or obstacles, or it could be, pretty much the road signs and things like that. So there could be multiple elements which needs to be annotated, fed into the AI so that it can be trained to identify these. What does validation look like?

Now, this is a very, very important aspect of the entire training process, right? We are talking about human-in-the-loop data QA. Again, multiple methodologies used in kind of building the traditional AI engines or models. It could be a single individual who's reading through a series of conditions and checks, and probably there's an expert who's validating them. It could be a consensus-based rating scheme where multiple people rate on certain conditions, and then it's all combined together to validate. Similarly, in a GenAI kind of a system, it could be validating the response generated by a generative AI and looking at the content, whether it's harmful, checking the facts, and of course, red teaming, which identifies vulnerabilities and, you know, if the system is hallucinating. What do our clients need today, and where are these applications deployed?

Most of the AI is being built for efficiency or productivity gains, better business models, innovation, and of course, increased productivity. We are seeing a lot of applications being built to engage with the customers and users. As I was quoting several examples, the popular ones we see today are chatbots, voice-activated systems, facial recognition systems, and of course, industry-specific AI. For example, in banking and financial services, it could be predictive analytics and risk management. In life sciences, it could be something to do with drug discovery. In education and research, it could be like related to research and development. Almost all of these AI models need vast amount of accurate and reliable data to learn on a continuous basis. It's not a one-time activity, right? The multiple scenarios, multiple use cases, multiple patterns.

There's a huge amount of data that needs to be fed into these systems so that they can learn. Now, let me talk about a use case that we executed for one of the major tech clients to fine-tune their generative AI. Their key objectives were to boost their LLM usability with domain expertise, improve the model safety, as well as differentiate their LLM from the rest available in the market. Some of the challenges they were facing included access to domain experts, ability to quickly scale up, train and manage the experts, as well as a flexible model to kind of switch on to new tasks. We deployed a solution, obviously, to address all of these challenges and meet their objectives.

We set up a recruiting and training program to identify the key resources who can come in and support the fine-tuning requirements here. We set up a secure infrastructure to prevent data breaches, as well as take care of data privacy, which is a very core component of anything in AI today. We built prompt response QA models to work on the model outputs to ensure the data is validated, as well as set up a team to conduct red teaming to uncover vulnerabilities in the system, executed on AI improvements, as well as plugin annotations. And the results, we kind of ramped up the entire team in a four week time frame. We deployed around 200+ domain experts, recruited and trained, delivered around 32,000 hours of work and successfully supported the rollout of the latest version of their LLM.

Now, as a subsequent step, we were also awarded two additional data services projects to continue working on supporting the fine-tuning of generative AI. We also noticed an interesting trend here. The client came back and also asked us to localize some of their prompts. We are already their partners in providing Language Services, translation services, and prompt generation is a key component of creating generative AI. It kind of nicely tied up to having a stream of localization work along with the TrainAI work. We see a lot of these trends coming up now where localization and the AI training data can parallelly exist. Now, why do I think TrainAI is a compelling right to win?...

We have an established capability of almost 100,000+ strong community of curated, data analysts who can help us in all of these, training data projects. We have a long-standing experience and reputation. I think in the earlier, part of the session, we, saw that most of my colleagues spoke about how AI has been built within our systems as well as being used. In fact, we have been building AI within several of our products, like Language Weaver, for like 20+ years, and we have also been using AI. So this is a unique introduction where we are launching a service which can actually help build an AI as well as use an AI. So it's a great, place to be in which most of our competitors, don't play in. And of course, a familiar route to an unsaturated market.

We are seeing natural demand coming from existing clients across industries, hence, leading to shorter sales cycles. Okay, what's changed since the last Capital Markets Day? In early 2022, we committed to growth in the data services market. Earlier this year, we launched the TrainAI brand, and also we followed up with the GenAI service offering, in addition to the existing data services offerings. Since the last six months, we have executed several projects in the areas of GenAI model tuning, data annotation, as well as search relevance, leading to multimillion dollars in revenue. Of course, we continue to invest in our marketing initiatives to improve our demand generation, as well as our cross-sell, upsell opportunities and sales support. We also continue to invest in our technology and platform builds, which are a core component of executing these projects.

What are the future developments for TrainAI? Well, we plan on going to market with a GenAI launch and, service extension. Again, we have been executing these projects for several, tech clients, and this is an official launch of the GenAI service coming up in early 2024. We continue to grow our business with industry-specific offerings. We are seeing a lot of demand coming in from specific industries, like banking and financial services and, regulated industries. So we are working on building a pipeline and identifying the use cases. Of course, we continue to build the TrainAI brand. We also solidify our TrainAI operating model, augmenting with industry expertise, both internal as well as the community, and we constantly evolve and adapt to the ever-changing needs. We also continue to improve the platform tools to drive better automation. Here are my key takeaways.

We have significant experience and proven capabilities, both in the platforms and the communities needed to deliver TrainAI projects. We are looking at a high-growth industry with ready access to clients who are investing heavily in AI initiatives. We are agile and nimble to meet the client needs today. It's a great opportunity for us to grow new stream of revenue, in addition to our localization revenues. It's never been a better time for us to be working with AI and growing our TrainAI business. Thank you. I'll hand it over to Maria.

Maria Schnell
Chief Language Officer, RWS

Hi, everybody. My name is Maria Schnell. I'm the Chief Language Officer of the RWS Group. I Language Experience Delivery, which i will introduce to you in a minute. I'm a translator by trade. I originally studied finance translation for German, Spanish and Portuguese. I've joined the RWS Group 17 years ago, and held many different commercial and operational roles, and I'm in my current role for quite a while now as well. Let me start by essentially introducing you into what we're going to talk about today. So I'll walk you through what we do generally in life, how that has evolved since the beginning of AI, how we use AI internally.

We have been for a while and are continuing to evolve that, and what AI ex-- concretely means for our linguists. So who we are. We essentially translate. We translate within the time frame and the budget that the client has given us, and we have to do so at a given quality level, and we need to produce in the most efficient way possible. What that means concretely, I'm going to use an example, is if you think about my car, my car has a multimedia interface, and I interact with that multimedia interface either by using voice commands or by pressing buttons, and I use it to access the GPS, listen to music, get warning messages, and react on that.

What my team does is they translate that user interface in the multimedia or all text in the multimedia interface. They will do that in an understandable language, so that even Maria, a mediocre driver, can actually understand what she's supposed to do. And they will also do that in a way that we are compliant with German road safety requirements. So essentially meet relevant regulation on the German market. Once that's done, my desktop publishing team will essentially refit the layout. German text takes up much more space than English text, so you may have to essentially increase text boxes and make the layout work within the very limited space that you have on a multimedia user interface in a car.

They will make other layout changes, which may include changing color scheme, changing logos that you mention or don't want to mention on a specific market. Anything that may be required to be considered relevant on the German market. After the layout phase is over, we're gonna have software testers play with the multimedia interface. What they will do, for example, is they will essentially talk to the voice assistant and make sure that the voice assistant hears my voice, actually understands that I'm talking about an address, that I want to start a route towards that address, and it proposes, like either the shortest route or the cheapest route to the destination that I want to get to, depending on the settings that have been agreed. That's what the software testers do.

My talent finding team will find all sorts of German voices, all genders, age groups, different regional distributions, so that I, a German middle-aged woman from the southwest of Germany, is properly understood when I essentially say an address. My audio video engineers in the team will synthesize this, the voice that responds to me, when I interact using voice assistance. All of that is what we do when I say we translate, so it's complex. We do that across a comparatively broad market. We have pretty huge geographic coverage through either internal or external teams. And we have a really broad language coverage as well. We cover more than 400 language pairs across the globe. We also have a very broad range of subject matter expertise, broad and deep range of subject matter expertise.

We have automotive translators, we have high tech translators, we have pharmaceutical and legal and financial translators. And our translators are also organized in verticals. We do that comparatively well, because we have unrivaled access to proprietary technology and AI. We use essentially all of the products that you've seen earlier today in Matt's and Tracey's demonstrations, and in Mihai's session as well. We also use other technology that our clients may want us to use in the context of the production process. And we also do that comparatively well because we essentially sit on a resource pool that is ours, and that's particularly the in-house team, where we have the ability to develop highly specialized skill sets that we don't want to expose to competitive pressures.

We can ring-fence that, that, resource pool, and we can use that resource pool to develop and expand that further, and we do that through programs like RWS Campus, for example. Where we have, for example, specialized, specialized, curricula to develop further regulatory labeling translators, so specialists... deeply specialist skill sets. Let me spend some time on how, how we're doing and what has changed since I last spoke to the ones that have spoken to me before. We translate about, 1.9 billion words in about 1 million projects. To just give you a sense of, dimension, 1.9 billion words is about, translating seven, Ulysses by James Joyce, the novel, about 717 times a year. So a lot of words in that sense.

We do that under a lot of time pressure. Our average turnaround time is 24 hours. We essentially get there by following the sun, so making time zones work for us, and by using technology. I use everything that development teams have to offer. About more than 60% of the content that we translate is already pre-translated from a neural machine translation engine. What has changed in the portfolio since we last spoke? Essentially, we have most growth in what I would call long-tail and mid-tail languages. Long-tail languages is rare languages. That's Indic languages, Southeast Asian languages, African languages, but also Native American, Native Canadian languages, for example. That's where we see quite material growth across industries.

What is special about those languages is you can't really automate them without having a human linguist actually document rules. In most of these languages, rules are not particularly well documented, if they are documented at all. So you need a human linguist to document those rules, essentially establish consensus about what good looks like with the supply chain, and then with the client, review our subject matter expert as well. So essentially, having a human linguist is a prerequisite for process automation, which is one of the reasons why we have put a lot of focus in growing in-house teams, particularly in those locales, and you may have seen that we've recently done an acquisition in the African market specifically for exactly that reason. Mid-tail languages are also growing.

Mid-tail languages are less rare, but the language set that is available for those languages is much smaller. That's all of the Central and Eastern European languages, Baltics, Hebrew, for example. Again, the language set is really, really small, and these languages are very complex, like very complex grammar, spelling, yada, yada, yada. Which essentially means that even if you use technology to translate those languages in, like, the most efficient possible way, even if you use neural machine translation, which is available for those languages as well, you need a lot more human input to make sure that the output that you get is accurate and appropriate for the context that you use it in. The other change that we've seen is we have a lot more requests for deep subject matter expertise.

Deep subject matter expertise when it comes to industry-specific knowledge, but also when it comes to cultural and language expertise. That's particularly true in the age of generative AI. Everybody's heard about AI hallucinations. You do need a human to identify this is a hallucination and to act on it, which in many instances is either adapting, if it's only a partial hallucination, or completely recreating, if the hallucination is not just partly. We also have a material increase in non-linguistic services versus previous years. Non-linguistic services is the stuff that I've mentioned before, functional testing, audio-video localization, layouting in some instances. This means a lot more time and material world. So the three digits after the comma, cost per word land, that's not where these services play.

This is time and material, hourly work that we can charge to the customer now. The reason why that is time and material is because those processes are a lot more complex. You will have a lot of interaction and dialogue with a client to essentially approximate how they want that content to look like, and you will have, consequently, a lot of process regression as you produce. I'll spend a little bit of time on how we work. You've seen this process alive, particularly in Tracey's demonstration. We essentially have a lot of machine that is applied, machine and technology that is applied before we even hit the translate, and the translator ultimately merely adapts.

What adaptation essentially means is you, you essentially have a dialogue with your customer, and you try to either make sure in that dialogue to make sure that you hit the brand voice that the customer expects in the target language and the target market, or you may have a dialogue with the subject matter expert that allows you to make sure that you properly understand and apply the regulatory framework within which you have to exist. That adaptation, that's the dialogue with the client review. The dialogue with the client review is predominantly text-based or has predominantly been text-based. In the growth of those non-linguistic services, we've seen a lot more dialogue about all of the other components as well.

There will be dialogue around how the layout looks, how the voice sounds, how testing is supposed to happen. In some instances, it actually may require a full-on format transformation. As in, you cannot necessarily assume that all content consumers in specific target languages can actually read, so you may have to essentially walk away from written text and go to spoken text altogether. Adaptation is a, is... can be a fairly far-reaching transformation. Despite the fact that I have technology supporting the full end-to-end process, and the box that you see here that says AI essentially indicates that there is a lot of technical support here as well, I am still worried about running out of humans on this planet. I'm particularly worried about running out of humans with the right skill sets.

They are rare, and they're particularly rare across such a broad language set, with rarer languages in the long tail and the mid tail languages. So I essentially use technology and everything the development teams have to offer for me to essentially do more with that very limited pool of qualified resources that I interact with. And at the same time, I use the in-house teams to essentially ring-fence those teams, expand and develop those teams further as a competitive advantage, and I also use them as a means to inform and optimize the technology that I use, and in doing so, create proprietary data that we use as we develop further technology solutions. Let me walk you through what else can we automate? Because there's already a lot of automation in the process. I'll take this stage by stage.

So we'll start with the translation stage. You already see that there's a lot of automation already happening, and happening for a while on the left-hand side of the table. That's everything you saw in the course of today in the technology demonstrations and the technology sessions. There still is more automation possible for a number of use cases, and you've actually seen hints at that in Mihai's presentation. So we're collaborating with Mihai's development team on improving the input into the translation process, so that there's less work for the translators to do at the end of the day. And we're collaborating with his team as well on MTQE, so the Machine Translation Quality Estimation, which essentially allows us to focus the translators.

Again, we don't have a lot of time as we produce this, to focus the translators on the areas that they need to focus on. I think one of the things that we're particularly excited about when we think about MTQE is MTQE, as Mihai and my team have built it together, is not AI evaluating AI. The evaluation has been generated with human tagging that happens as part of the normal translation process. So the AI can now recognize what good looks like and what a human would see as good output. I think if we go to the translation stage, the translation stage is already highly automated. I mean, most of the stuff that we get by now is pre-translated, so we spend most of the time actually in adaptation.

What you see on the left-hand side has been around for a while. This is essentially just making sure that linguists only type once and only type when they have to type. There are still areas that can essentially be automated further, and that's particularly true for all of the use cases where so far we've been working with very unstructured data. That's true for the research part of the translation process. As a translator, you have to do a lot of research to just validate, do I understand a particular product and service correctly? Do I understand the source correctly, et cetera. Which in most instances means you have to crawl through huge databases, through huge documents, reference documents, sometimes several documents with several hundred pages.

Again, 24-hour turnaround time, you can't really sit there and read the Bible. It's a lot of content. So essentially having an assistant that you can chat with that will find the relevant information for you quickly, even on unstructured datasets, is a real support in that situation. Quality assurance is another area that we want to focus in. This is ultimately about AI learning from linguist behavior. If you particularly deal with some of those, more complex languages, so the long-tail and mid-tail languages, a lot of the error messages that you will get in automated quality assurance will be less relevant. You'll get a lot of false positives.

AI can now learn from which error messages are actually relevant and increasingly present you with exclusively relevant error messages, so that you accelerate as you correct and fix content once you've received it. This is essentially only possible because we combine our specialized language technology with the power of large language models to just essentially help us learn faster from unstructured data sets. If we go to the adaptation stage, that is essentially the stage that is almost exclusively manual at this point. We do already have some baseline automation, particularly in the whole area of audio-video localization with synthetic voices, et cetera, but there is a lot more possible now with the power of large language models. We can essentially venture into machine-first layouting and machine-first audio-video localization, or audio-video production.

We can venture into more and faster testing automation, and we can finally find a way to translate scripts with a machine-first approach. Theoretically, you can already today try and use neural machine translations as you translate scripts, scripts for a voice-over or subtitles, for example. The problem, though, is that most translation engines don't deal very well with the visual time limitations. You need to make sure that the right text is on the right screen, and you need to make sure if you, if it is actually dubbing as well, that it's lip-synced.

So, we are now working with the development teams, particularly in Matt's team, to take an automation-first approach there and fix that visual time limitation issue for us, which will be a real game changer in audio-video localization and software localization, where you also have a visual, visual time or a visual limitation. You have a teeny-tiny button that says OK. If you have a very long translation for OK, you have a problem. So essentially, having a machine that helps you with that limitation, will really make a difference here. Let me explain to you why automation has so far been so complex... Oh, sorry, adaptation has so far been so complex to automate. It's mostly because what you do after translation is discuss preferences, discuss preferences in, in text, and increasingly discuss audiovisual preferences.

So when I say audiovisual preferences. To give you examples of the kind of preferences that we will discuss with client reviewers, we will discuss in the layout the whole how far do you refit the text box to make it fit? Because sometimes, like, if this is the size of the user interface and you have a lot of German text, you can't make the whole button bigger than the user interface. So discussing around where is the limit of refitting is one of the areas. Another area may be that you have visuals in the background that need to be adapted.

We had one client that had in an e-learning course, by the way, this is an e-learning example, a male arm in the background, and the customer said back to us, "This is too much hair for the Asian market. Remove the hair or replace the arm altogether." So this is the kind of preferences that we will discuss there. We will also have discussions around preferences around voice. We have one client where, we're essentially localizing live training for man-on-the-street defibrillators, so the defibrillators that you'll find in a stadium, for example, where you're supposed to, anybody is supposed to be able to resuscitate somebody. So what we ultimately do there is discuss with the client what voice is appropriate. In Germany, a low-pitched female voice would be deemed appropriate for that context because it's supposed to sound soothing.

In the US, it's a male voice. That would be female voice altogether, no matter how high or low pitch, would not work in that market. So we'd have to replace that voice as well. There's also preferences to discuss in the course of testing. What you do in testing is just click through the software or talk to it until it breaks. In a nutshell, you can test until whenever I retire. But you need to essentially discuss with the client what is the most relevant user journey as you essentially navigate that training course. So, those are the kind of preferential discussions that you have. I think that all of these preferences make the process long. Remember the 24-hour turnaround time? Panic. It makes the process long, it makes it very winded, and it requires a lot of process regression.

So, being able to essentially accelerate the, the adaptation to human preferences, and there is no limit to human preferences. We have a lot of discussion about hair in unexpected places with customers. So, essentially being able to automate that at scale across a lot of content and across essentially a wide array of languages, is something that is a real game changer here. And the reason why we're so excited about it is because it makes that affordable now for mass localization. So it opens up at least one, if not two, new content types for mass localization. The other reason why we need to think about this is, there's ethical reasons about why adaptation is complex.

We are definitely seeing that humans are getting better at recognizing AI-generated content, and increasingly, the sentiment around AI-generated content is negative. So people distrust AI-generated content increasingly. So we're seeing an increasing requirement for just marking this as at least human-validated or advising our customers on when is AI-generated content good enough. There will be use cases where that is the case. The other area that we need to think about is how do we use people's likeness, people's voice, and their image? Just because it's technically possible to take my profile pic and make me speak whatever you want to make me speak, please don't do that, I will be very upset. You shouldn't do that.

You need to have a dialogue with your clients about what is a safe use case as you essentially artificially animate pictures, likeness, voice, that belong to another human individual. So our clients expect us to treat their voice, their likeness, their image responsibly, and our suppliers do that as well, and that's one of the areas where we can differentiate as well. One of the things that we are noticing is that this, of course, like all of the preceding evolutions of automation, also means that the role of the translator is changing. If you look at... If you essentially remember what Ian and Thomas mentioned earlier, we have the problem of content explosion and content explosion across a lot of languages. So there is now a lot of localized content out there.

Content consumers are struggling to find the relevant content, and our clients are consequently struggling to attract the right target group and get their attention. Because of that, we have two localization pathways that have evolved. One of them is translation, good old-fashioned translation. Good old-fashioned translation is all about accuracy, it's about consistency, and it's about risk reduction. Making sure that somebody who takes medication takes it exactly as intended, making sure that somebody who uses machinery doesn't put their finger somewhere in that machinery that rips it off. That use case is still very relevant, and it's particularly relevant for documentation and post-sales content, where it's, again, all about risk reduction. It's also quite relevant in a regulatory framework or in IT services, for example. The other pathway that has evolved is hyper-personalization. This is all about engaging with other humans.

So it's about understanding who is the relevant target group, how that relevant target group consumes the content in which channel, on which device, et cetera, and essentially adapting whatever you have generated in the translation to the right use case. That is what I mentioned earlier, making sure that you have spoken text where you know that written text will not really be relevant for a target group, for example. This content—this hyper-personalization is particularly relevant for pre-sale content, of course, because you want to sell. You want to engage potential new users of a product or a service, for audio, video content as well, but also increasingly for post-sales content.

If we go back to that defibrillator, you don't want the person who is supposed to resuscitate you to go, "Let me read chapter five of this manual to figure out how to make sure that you stay alive." It's about having post-sales relevant, essentially, relevant support as and when you need it in the right language. What that ultimately means is artificial intelligence is best placed, so AI is best placed in the translation space. Where essentially you need accuracy, where you need consistency, and where you need predictability of outcome. That doesn't mean that this is an-- that this is essentially a machine-only space. Particularly for those long and mid-tail languages, you still need human support to optimize the outcome of, of AI.

Human intelligence is best placed in the hyper, hyper-personalization space, where you deal with a lot of variations. Humans will engage with humans, so humans will be able to tell you how you best engage. That also doesn't mean that this is a human-only place, space. You will use technology and AI to be able to accelerate how you adapt all of those preferences at scale and across a lot of languages. So this is all about finding out where humans versus the machine are best used. There's space for both of them, and frankly, they're better if they collaborate. What we're learning, though, as we look at how what the linguist does evolves, is that calling them translators is increasingly limiting. So we've decided to stop calling them translators, and we'll start calling them language specialists.

So what language specialists essentially do is they're humans that will localize communication across formats, across cultural contexts, regulatory frameworks, media, et cetera, and they will interact with AI as they do so. AI is a fundamental tool of their day-to-day, and they will essentially train, optimize, continue to develop as they go. There's new roles emerging as we talk about this. We have creative writing is an increasing skill set that we require in a translator or in a language specialist. There is a lot of requirements around content transformation and content optimization, and being smart about the use of language technology is a big, is a big requirement now as well. So here's the key takeaways that I want to leave you with as I go.

Production is already highly automated, and there is more space for automation in language production. AI, however, is not only an efficiency opportunity, it's also a growth opportunity. It allows us to essentially unlock two major complex content types for mass localization. And essentially, the role of the human only evolves again, as it has before. Again, both humans and machines are crucial for essentially getting the best out of the end-to-end process. Humans bring depth when it comes to subject matter expertise, when it comes to market and language expertise, and machines bring acceleration and enable us to do mass localization across a lot of content and a lot of languages. I think over to you.

Ian El-Mokadem
CEO, RWS

Thank you, Maria. Right, a few closing comments from me, then we'll have another Q&A panel, and I think we're all probably ready for a drink and a canape. We set off today with a few objectives. Hopefully, we have achieved all of those things. Hopefully, we have demystified the products that we have and the technology story that sits around them, explained the role that technology is playing in our business today and how we see that evolving, showcased the capability and the expertise that is making it all happen, illustrated the examples of how AI is supporting both growth and efficiency in our business, and outlined some of our future technology investment areas and focus areas, and giving you that opportunity to interact with many of our colleagues.

I mentioned the convictions we had at the beginning. Just to remind you, our belief here is that this is essential for us, that not playing in this technology space is just not an option. It is a quick road to an early death. We firmly believe there's a role for humans in this chain, but that it is evolving and has continued to evolve over many decades. Hopefully, Maria's presentation, in particular, gave you an example of how that role of the linguist transitioning to that language specialist is real, and the interplay between the technology that we're developing and the humans that making that happening, as Vasagi also illustrated through the TrainAI examples.

We do believe that efficiency will continue to come in that core translation component of what we do, but we do believe that there are other service areas and that growth in content that will more than balance that. That is why we put in place a lot of the growth initiatives that we initiated last year. We do believe right now is a great moment for a business like us that is equipped to play in this space with existing relationships, with clients who need answers to some of these questions right now. They are naturally turning to us, and we are well equipped to help advise them on how these new technologies can help them with their language and content transformation challenges, and we are seen as an attractive partner.

As you heard through the afternoon, you know, we're partnering with communities, we're partnering with other people in the chain, so providers of infrastructure, providers of open source models. We don't have to do everything to bring the tailored solutions in our market. We need to focus on the bits where we really add value and partner where we can. We have the enterprise-grade products. We have a great capacity for creating and validating data, which is the most important ingredient in this AI path that we're all on now. You need deep expertise to understand how to do that well and to avoid the many pitfalls that exist there. We have great clients. We've partnered with many of those clients to build the capability that we have today. TrainAI, the only reason we can do that...

Two reasons: we've been doing it ourselves for years in building models like Language Weaver, but we've also been working with the large technology clients to build their voice assistants, to build their LLMs. That's how we've learned how to do this. So those partnerships have equipped us at a moment when those things are going to grow, and other enterprises are going to seek to start training AI models for a whole host of new applications, and we are seen as an attractive partner. So we do think we're really well placed in this emerging world, and I think there are kind of three, you know, core ingredients here. There's that focus for us on content transformation.

That's that ability to leverage our significant scale in our industry to bring those solutions together, using our large language and other communities that are becoming increasingly important to build these tools. And ultimately embracing artificial intelligence and human intelligence to deliver what we believe, and we are now going to term genuine intelligence. The combination of those two things, to build, you know, relevant solutions for our clients and to help us to continue to grow our business... That's it, from us in terms of content. I think we'll do one more Q&A panel. So if I could ask the speakers who just spoke in the second half to come up, and maybe Thomas as well.

If there are other, other questions for other colleagues, I'm sure we can use a roving mic to get to Matt and Mihai, and Alex as well. Tracey, of course, as well. Are there any questions? James.

James Beard
Equity Research Analyst, Numis

Thanks, sir. James Beard at Numis. Two questions, please. Firstly, on TrainAI, can you talk to the competitive dynamics and the competitive environment that you currently see in that space? Obviously, we all know sort of Appen is a notional competitor that's had a pretty troubled time in the last year, so just sort of getting some perspective on that. And then the second question was, thinking about the Maria's presentation on the LXD, and sort of segmenting between what is sort of gonna be done by AI versus human intelligence. What happens when your customers start increasingly using, for example, LLMs for transcreation purposes?

Is that something that's a sort of relevant consideration? Is that something you've started seeing? What would be your response to that?

Ian El-Mokadem
CEO, RWS

Right. Vasagi, definitely the first one is for you, and the second one is for Maria, so.

Vasagi Kothandapani
SVP of Strategic Accounts and Head Train AI, RWS

Yep. Let me answer the question on TrainAI. One, the space is large in terms of what we are playing with. Of course, there are multiple competitors working in the space of providing training data. Appen is one, and there are several others, you know, similar to what we are trying to do. However, the scope of applications getting built and the kind of data requirements is huge. For example, you look at all the major tech clients, they are investing billions in building their AI applications. And data forms a very core aspect of you know, successfully creating those applications. Hence, there's a competitive play for multiple players in the space. And potentially, the market size is the recent...

For example, the report says that it's almost $2 billion kind of a market in the next 2-3 years. Hence, the market is huge, even if we are looking at hundreds of players coming in the market, right? To answer your question on Appen, obviously, I spent a couple of years with Appen and have first-hand experience of what we were dealing with, right? Without getting into internals, I could say that there are certain areas or pitfalls that we need to avoid for, right? One is the concentration of work.

The tech companies are the majority of the ones who are investing in AI, and with the advent of GenAI, we are seeing a lot more organizations coming forward to build AI applications, like the banks and various other industries, like the manufacturing industries, life sciences. So diversification is the key. So you don't put all your eggs in one basket, right? Like, you just don't work on a set of clients. So diversification is one of the aspects. Second aspect is also how these products or data is generated. We are looking at a community or an open crowd kind of a model to generate. So the advantage that RWS has is our expertise in working with communities, so we have been doing this for two decades or so.

Several of the new companies who are coming in, they're trying to figure out these models, trying to build something, learning, you know, falling into traps. So we have a great advantage that way in terms of having expertise in building teams, as well as, you know, kind of executing programs on the Language Services. It's more or less similar model, so we already know, what are the best practices and, pitfalls to look for. And the last one is obviously access to, kind of existing clients. Companies like, Appen or, even some of our competitors, I don't think they have the advantage of access to clients across industries. They work with very, very specific industries. So unique advantage that RWS has is our access to clients we have built relationships with on the localization side of the house, hence, opportunity to cross-sell.

So we have a bunch of opportunity areas as well as advantages. That's very, it's a competitive advantage, I would say, for RWS.

Ian El-Mokadem
CEO, RWS

Thanks, Vasagi. Maria, you ready on the second one?

Maria Schnell
Chief Language Officer, RWS

Yep. We already see clients essentially creating content using Generative AI. I have learned that that is good news for us because it essentially requires a lot of research, adaptation, and recreation of content. Hallucinations are real. They're a problem, and essentially, the kind of work that that then means for us is, again, time and material work as opposed to per word work.

Ian El-Mokadem
CEO, RWS

Right. And-

James Beard
Equity Research Analyst, Numis

One other comment-

Ian El-Mokadem
CEO, RWS

Yeah.

James Beard
Equity Research Analyst, Numis

on that. When we're talking about LLMs and GenAI right now, you're really talking about five, maybe ten languages. And sure, there's gonna be more to come, but when you compare that to the hundreds of languages that we cover, the economic model of trying to create content directly at the source language doesn't necessarily make sense.

Maria Schnell
Chief Language Officer, RWS

We, we have had customers who've tried to generate for more languages than that. The quality of the output is really, really bad, and you essentially need a lot more humans to fix that content than you would have needed if you translated that in the first place.

Ian El-Mokadem
CEO, RWS

Thanks. Any other questions?

Karl Green
Director of Equity Research and Business Services, RBC Capital Markets

... Thanks very much. It's Karl Green from RBC again. Just thinking more generally, we've talked about competition, but it strikes me that there are other industries and ecosystems which also have domain expertise different from you, and arguably, I think some of your domain expertise is as high as it gets. Are there any potential opportunities for joint venturing with other players in industries, say, for example, customer experience, where they've got a rich seam of data from, you know, voice recordings and so forth, but they've got some domain expertise as well that might be complementary with yours. So could you envisage partnering moving forward? So that's my first question, and then the second one, I think, Maria, you possibly answered it with your final remarks.

Just in terms of these mid and long-tail languages, what you're saying is basically there's just no chance anytime soon that machine translation is gonna be able to tackle them. They're just far too complex. The syntax is all over the place. Is that?

Maria Schnell
Chief Language Officer, RWS

No, that's not what I'm saying.

Karl Green
Director of Equity Research and Business Services, RBC Capital Markets

You're not. I'm glad I checked.

Ian El-Mokadem
CEO, RWS

Do that one first. Yeah.

Maria Schnell
Chief Language Officer, RWS

I will, I will quickly respond to that one. We are using neural machine translation for all of the mid-tail languages and some of the long-tail languages already. It's just you need more human intervention. For very established languages like French, like German, like Spanish, you, you don't need a lot of human intervention unless the client wants to essentially make it sound more human, et cetera. For those languages, you need more human intervention. So it is a... You already get efficiency gains, and as, as Thomas hinted earlier, if there's one thing that we're learning is that AI evolves fast, so it will get better. But some of those languages are so complex that reaching the level of French or Spanish at this point in time is something that I will start thinking about when I retire.

Ian El-Mokadem
CEO, RWS

Yeah. And look, that's one of the reasons why we made that small acquisition we announced last week, ST Communications, all about African language capability. That's a business. Back to your other question, we've been partnering with for many years, so we know Sharon and the team. They've been helping us build our African language capability. And, you know, it felt like the right time to actually acquire that capability from a competitive perspective, to give us the ability to help Sharon to expand that footprint, because we see that as an area where we're gonna have growing need for humans, even with the technology evolving at the same time.

And I think on the partnerships question, we definitely think partnership is sort of a broad theme for us now, and they always have been to a degree, and, you know, that's a small example of in a non-sort of technology area, if you like. But I think the systems integrators are an obvious kind of, you know, we're already doing work with one or two of them, where, you know, clients are looking at their technology stacks for customer experience, for in-house content management, and we're a natural partner for them in this area.

I think the other interesting area is, like, we're a natural partner for small technology businesses that have reached a certain stage, have proven a model, and then need a big partner to help them take that to market in a more scaled way. And I guess some of the transactions we've done recently, you know, with John and the team in Propylon, with Fonto, in the content management space are good examples of that. So look, I think partnering is an increasingly important skill, actually, that we're gonna need to nurture further, and of course, M&A plays a role as well.

You know, we've got the capacity, you know, where we see, you know, something that's reached a certain maturity to then bring it into the portfolio as well, and I think, again, that's an advantage that, not a lot of our competitors have. Karl.

Calum Battersby
Director Senior Analyst, Berenberg

Thank you, guys. Calum Battersby from Berenberg. So to follow up on the transcreation question, as Maria said at the end of her section, there will be some areas where AI-generated content is applicable for client use cases. So even if it's not at the quality standard today, do you have a view on what proportion of the existing RWS' services portfolio could fit in that bucket? And then, if that does take place, kind of what proportion of the work you do today you'd still need to do in the scenario where a client chooses to kind of produce that themselves rather than translate an existing piece of work?

Maria Schnell
Chief Language Officer, RWS

Well, we're currently seeing most of the use cases where Generative AI is being used and is being used successfully, is, I think, what you called the transient, like the left bucket.

Ian El-Mokadem
CEO, RWS

Yeah.

Maria Schnell
Chief Language Officer, RWS

How did you call it? Transient?

Ian El-Mokadem
CEO, RWS

Low value transient.

Maria Schnell
Chief Language Officer, RWS

Low value transient. That's where we're seeing people using that content. It's definitely fine. We already don't really translate that content. That is machine only, at best, machine first, depending on the languages, at this point in time. So it doesn't really radically change the content mix.

Ian El-Mokadem
CEO, RWS

I think there's, again, back to sort of opening up markets. I mean, think of e-learning, which is another one of our sort of growth initiatives. I mean, you know, it's now possible and economically sensible to localize e-learning content within an organization that previously you'd have just done it in English and hoped everybody sort of understood it if they didn't speak English as a first language. Now we're seeing that as a growth opportunity because the price point is viable and then, you know, other things around regulation and accessibility are, you know, forcing people to think about that anyway. So, you know, I think that's where, again, technology is sort of helping to find new growth opportunities that we're pursuing at the same time.

Maria Schnell
Chief Language Officer, RWS

Mm.

Ian El-Mokadem
CEO, RWS

Maybe one or one more question before we go out for a drink, or maybe not even one more question. Is there anyone online I should—I keep forgetting. You want to go for that?

Thomas Elliott
Partner in Investment Management, Evelyn Partners

Great. So a question from Thomas Elliott at Evelyn Partners: Do you feel the poor share price performance, is the market deeming RWS an AI loser rather than an AI beneficiary, and is it, it is simply a misunderstanding that today is aiming to resolve?

Ian El-Mokadem
CEO, RWS

Well, I'm tempted to turn that question back to this audience, actually. But, look, great one to end on, really. I mean, look, I guess the whole point of today was to showcase our AI and technology capabilities. And look, I mean, in a sense, nothing here is new, right? We've kept referring back to the strategy we launched last year because we talked about all of this stuff then. What has changed, of course, is the focus on this, and understandably, I think, you know, for investors, I can well understand how figuring out the impact of AI on all the businesses you look at must be a real headache. I don't envy you that, to be honest with you.

I hope what we've done is shown that we have a plan, that we are, you know, a well-established player in our industry, that we've been adopting and, you know, grasping technology and trying to figure out how to make it work for us for many years. That the SDL acquisition was critical and a key component of that, and what you've seen today is a blend of capabilities from both sides of the business. So TrainAI, very much from the old Moravia side of RWS, where we've been partnering with those large technology clients for many years, and then a lot of the software, you know, capabilities that came from SDL. So we've been on a path. We're on it.

We're aware of the challenges and the opportunities, and I hope what we've shown today is we've got a really credible team of people who are quite thoughtful about this, very experienced in this area, and know what they're doing. So do we have perfect foresight? No, we don't. I hope what we have done is given you all pause to think a little bit about whether we really are poised to be a bit of a beneficiary here. That's what we believe, and we know we've got to work hard to deliver it now. And I think on that note, it's time to go for another break. There are more tech demonstrations if you didn't manage to get around them all, and otherwise, very happy to have a chat in the other room. Thank you all very much.

Powered by