Your line is now open. Please go ahead. Hello, and welcome everybody out there to the next edition of our IR virtual tutorial from Deutsche Post DHL, in our effort to let you experience what we mean by our claim, excellence. simply delivered. For today's round, as you've seen from the invite, I'm glad to have with me today Catherine Koenig. Thanks very much.
For taking the time and making yourself available. I'm pretty sure it's gonna be an interesting, I don't know, 60, 70 minutes, something like that. We introduced this format not too long ago to have a chance to educate you on specific topics in the group which we think are of interest for you, and I'm pretty sure you're gonna agree with that also after today's session. Invites are already out for the next one with John Pearson and team on why e-commerce works for DHL Express. So looking forward to that one, and there's more to come.
But let's now focus on today's topic. You see here our strategy picture, the group strategy as announced last fall. And needless to say, over the past couple of months we had plenty of opportunity to see that this strategic approach of our group is actually working out. One element we wanna focus on today is, of course, digitalization as one of the key enablers of the group's future success. So what do we mean by digitalization? Well, obviously, in line with our three bottom lines, we want to come to a better customer experience with the help of digital tools. At the same time, there are plenty of opportunities to make work easier and more efficient for our employees and overall increase operational efficiency.
And that's where you guys come into play, because that's typically gonna spell some sort of financial benefit. So we will take a look today at how we are dealing with the theme of digitalization. We have identified five areas of digital improvements and initiatives where we, on a group level, run so-called centers of excellence, which then develop the right approaches and tools and everything that's necessary, and make sure that this is scaled up through all the divisions. That is clearly a big advantage over smaller players who only have one division and have to carry the investment alone; we can scale that much better. And you see the areas that we are covering with this approach.
So today, quite obviously, we want to talk about data analytics. And I think what Catherine is going to show you is that indeed this approach is one of the key enablers to really digitalize our operations and bring it to life and fruition throughout the group. There are, as we will see, a multitude of solutions, whether that's benefiting the operational experience from an employee point of view, or leading to a better customer experience, or simply saving money. And we will share with you where we stand in rolling this out throughout the group. One more technical remark: for those of you who followed earlier tutorials, you know the procedure. You see on your screen the Q and A field.
So whenever you have questions, just punch them in. They'll be ending up here on my screen, and we're gonna deal with them in the Q and A session. And with that lengthy introduction, finally over to you, Catherine. Please.
Yeah. Thanks, Martin. And welcome also from my side. I'm really glad I have the chance to talk about that exciting topic today. And let me start the whole topic with a short teaser.
This teaser is an example of one of the data science solutions that we developed, and I think it nicely shows what data science can do for us as a company and what benefit it brings. So what is it about? Well, it's about a problem that you probably all know from your personal lives as well. Sometimes you need to go to a place you have never been before. It's an address, and you need to find this address on the map. Could be the hotel of your next vacation destination or something like that.
And I think all of us, to do this, use external map services like Google, Bing, or HERE to solve that. When we look at our company, we actually face this problem millions of times every day, because we get the shipments from our customers with some address information, which can be more or less clean or structured, and we need to locate that as well, somewhere on the map, on the earth. And in order to find those addresses, we actually came up with our own internal approach, because, as you will see in a minute, this works much better for us than the existing services. So I will show you now how we leverage our unique internal data assets together with machine learning algorithms to solve this problem of geocoding.
So let me quickly share our geocoding engine with you. This is our geocoding engine and the front end that we developed, and I think it shows nicely what we can do here. What you see on the left-hand side is a delivery address from the Netherlands as we typically get them from our customers. This looks similar to the typical search engines, you know; we wanna know where this address is. So let's search for it, and we see it somewhere in the area of Tilburg.
And if we zoom in further, we see that in this case it's a hospital, and we get the geocode on a very specific point in that hospital area. Now I wanna quickly explain to you first how we do this, and then secondly, why this works much better for us than the existing services that are out there. Well, I mentioned already, we have millions of shipments that we deliver every day. And whenever we do that, we actually capture the geocode of those locations. And this is what you see here now on the map: all those blue pins are the deliveries in that area. And this is the base for our solution. So we use that data and we apply machine learning algorithms to it to structure the data, to match it, and then to determine the final geocode.
Now, why is this internal approach with our own data so much stronger? Well, the advantage is that the information that we use is much more relevant and specific for our purposes. So this is a hospital, as I just said. And very often, we need to deliver to large buildings like hospitals or malls. And you can see here that our data knows exactly the location where the courier needs to go.
So it doesn't point us to the main entrance or the center of the hospital, but really to the courier entrance, so to say. And an external service, as you see it here with the red and yellow flag, just cannot know this information. So this is just much more powerful. And you can imagine that in countries where we don't have a structured address system in place, this becomes even more beneficial for us. So this was a quick example and a teaser.
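For readers of this transcript, the matching idea Catherine describes, deriving a geocode from our own historical delivery fixes rather than from an external service, can be sketched very roughly in a few lines. All addresses, coordinates, and the naive normalization below are invented placeholders, not the production logic:

```python
from statistics import mean

# Hypothetical historical delivery records: normalized address -> captured GPS fixes.
# All names and coordinates are invented for this sketch.
HISTORICAL_DELIVERIES = {
    "hospitaalweg 1 tilburg": [(51.5606, 5.0320), (51.5607, 5.0322), (51.5605, 5.0321)],
    "stationstraat 12 tilburg": [(51.5601, 5.0839)],
}

def normalize(address: str) -> str:
    """Crude normalization stand-in for the real cleaning/matching models."""
    return " ".join(address.lower().replace(",", " ").split())

def geocode(address: str):
    """Return the centroid of past delivery fixes for a known address, else None."""
    fixes = HISTORICAL_DELIVERIES.get(normalize(address))
    if not fixes:
        return None  # a real system would fall back to an external service here
    lats, lons = zip(*fixes)
    return (mean(lats), mean(lons))
```

The centroid of past courier fixes naturally lands at the courier entrance rather than the building's official front door, which is the advantage described above.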
So let's get back and start with the actual content and look a bit more into the topic. We wanna do that based on deep dives in four areas today. First, we wanna look into the big opportunities that we see and that we have in the area of data analytics. Second, we then wanna share with you our understanding of and our approach to analytics with a typical project cycle.
Then we wanna share further use cases and especially talk about the big initiatives we are currently running in the area of analytics and the benefits that we get from those. And finally, we will share a bit more about our approach on how to scale a topic like analytics into a large organization like ours. So, starting with the big opportunities. What you see on the next page is, I think, not new to you. Those are our three bottom lines.
And I think the geocoding has already shown you that analytics can bring benefits in many different ways. When we look at those bottom lines, analytics can clearly support all of them. And let me just go through them based on the example of the geocoding that we just saw.
So clearly, if we have more precise geocodes, we can do better route planning and we end up with shorter routes. This leads to clear cost savings, but of course also reduces the CO2 emissions that we produce. And when we look at our couriers, they have better tool support: they get more precise routes, they have to do fewer detours, and they can deliver better service. And this finally pays off for the customers, because they experience this better service quality that we can deliver to them.
And we will go through further examples along the presentation, and you will see that all of them address at least one of our bottom lines. But before we do that, I wanna have a bit of a closer look at the financial aspect. What you see on that page are the two main cost blocks that we have: the staff costs and the transport costs. And when we looked at all the analytics initiatives and projects that we ran, we saw that almost all of them address at least one of those buckets.
And I can maybe quickly highlight that with one very concrete example, which is about volume prediction. If we have a data-driven forecast of the volumes that we expect to handle in our sites and of the volumes that we wanna transport on our routes, we can optimize resource planning and also the transport routes based on that. And we've seen in the projects that we ran that this can lead to significant reductions in both areas.
And this is also why, as I think Martin said in the beginning, analytics is one of the substantial pillars in our digitalization strategy and will make a significant contribution to it. You do see big numbers on that chart, but today it will not be about sharing with you concrete numbers on savings use case by use case. It's much more about understanding how we really enable an analytics culture and how we create a mindset shift, because this will really enable our organization to manage the company better in many different ways and aspects. Alright. And to illustrate that, I will now talk a bit about our view on analytics and our approach to the topic. Let me maybe start by sharing with you our understanding of analytics, because I think that's quite important. What I often see is the impression that analytics is purely about taking huge amounts of data, throwing it into some magic tool, running some fancy algorithms, and then automatically you get exciting insights that are of course business-relevant and value-adding.
Well, our understanding and our view of the reality is a bit different. Actually, in our understanding, we start the other way around, and you see that illustrated on the left-hand side. For us, analytics always starts with a very concrete business problem and a question you wanna solve. Taking the volume prediction: I wanna know how much volume I need to handle in my sorting center tomorrow. Only then do you start collecting data, data which can help you address this question. That can come from different sources and can be in different formats. Then you start the whole process of cleaning and processing the data, and you run your mathematical models, but you always have the business question in mind, and you always combine it with the business knowledge and the expertise that you have.
Because only then can you really come up with valuable insights in the end. We will go through an example a bit more in depth in a minute. But I would also quickly like to show you a bit of what is inside this machine that you see on the left here. On the right-hand side of the screen, you see one way of classifying data analytics. This is a quite common framework, so you might have seen it before.
But let me quickly go through it. It starts at the bottom left with descriptive analytics. This is really about understanding the past and understanding what has happened. So the typical business intelligence or dashboard type of activities.
When we then start looking into the future, trying to make guesses about what will happen, we are in the predictive area. You can make all sorts of predictions, right? You can predict volumes, you can predict customer behavior or demand, or you can even predict the geocodes, as we've seen in the beginning. The next step after predictive is then prescriptive analytics. This is about optimizing your processes based on those predictions.
This might sound complex, but here's a simple example: we have a prediction of our volumes in our transport network, and based on that, we optimize the transport routes for the next day. The descriptive area is an equally important part for us, but in the following, when I talk about data analytics and data science, we will refer to the predictive and prescriptive analytics applications.
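The predictive-feeds-prescriptive chain from the volume example can be illustrated with a toy sketch. The lane names, volumes, and truck capacity below are assumptions made up purely for illustration:

```python
import math

# Predictive step (assumed output of a forecasting model): parcels per lane tomorrow.
predicted_volume = {"HUB-A -> HUB-B": 4200, "HUB-A -> HUB-C": 900}

TRUCK_CAPACITY = 1000  # parcels per truck, an invented figure for the sketch

def plan_trucks(forecast: dict) -> dict:
    """Prescriptive step: minimum trucks needed to cover the predicted volume."""
    return {lane: math.ceil(vol / TRUCK_CAPACITY) for lane, vol in forecast.items()}
```

So `plan_trucks(predicted_volume)` turns the prediction into a concrete next-day resource decision, which is the essence of the prescriptive layer described above.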
Alright. So now we have a common understanding of analytics. What I would like to share with you now, a bit more in detail, is our approach to a typical analytics project, because there are some quite interesting insights that we see along that process. Those general six steps that you see here are applicable to more or less any analytics project that we run, but I wanna go through them based on a concrete example, which is the operational volume prediction at Express.
Which is actually referring to the daily challenge that our guys in the global Express network have: to deploy the right capacity on a given flight from a hub to somewhere in the region or network, they need an idea of the volume we're expecting on the next day or on a given day, and then what type of capacity planning we need to come to. Right?
Exactly. So it's for the next couple of days, and it helps us to steer this capacity in two ways. On the one hand, we can make sure we have enough capacity in the network. On the other hand, we can also sell excess capacity. So it's really an important input for all the planning that follows.
Okay. Selling excess capacity is not the usual problem these days, but it is in normal times. Perfect. Okay. Let's get through it.
Good. So we start with the first step, the problem definition. I mentioned earlier that we always start with the problem, and this is what this first step is all about: really getting a very good understanding of it.
And Martin already shared a bit of what it is about. It's really about predicting the volume on each flight in our flight network for the near future, with the two clear objectives that we have: steering our capacity for quality and selling excess capacity. So this is really the very first step: having an understanding of what it is about.
The second step is then the process understanding. Here we really wanna understand how the model or the algorithm that we build will fit into the processes, because only if it's fully integrated in the end will we see benefits from the work that we are doing. Here the data scientists typically join the operations; they talk to users of the tool and so on. So it's a quite important step. And here, for example, we learned that there was already an existing tool and process in place, a tool with a rather simple forecast. That was really good news, because we knew that there was not a big change in process, or change management with the people, that we had to do. So that was quite helpful in this case.
Then we can get to the third step, which is data collection. Here you can really get creative and think about all potential data sources that could help you answer this question. And I have to clarify already up front: there are always a lot of ideas about fancy data sources, and it's good to think about them, but what we've seen in this case, and also in many other projects, is that the most important data is really your own historical data. You wanna predict your volumes? Well, then you need your historical volumes, as granular and as far back as you can have them. But of course, there are other important data sources, like information on holidays or the flight plan, that you connect in that phase.
That brings us to the fourth step, the data cleansing and pre-processing. This is quite an important one, because here the data scientist really sits very closely with the business experts. It's about trying to understand patterns in the data. For example, you see an outlier in your data series; then you wanna understand the business reason behind it, so that you know whether you take out this outlier, whether you average it out, or whether you leave it in. So this is really very important and very interactive between data science and business. Just to give you one concrete example again for the Express case: heavyweight shipments were a big topic in that part, right? Understanding whether they are part of the base data, whether we wanna take them out, or whether we wanna forecast them maybe separately to avoid distortions. So this is what this step is all about in that case.
Alright. Then comes the modeling step, and I think this is quite interesting, because when you think of data science, you think it's all about modeling and all about what this guy here in the picture is doing. But actually, it's only step number five; there are a lot of important steps before that. In this step, the data scientist really combines his or her methodological knowledge with all the business information and all the business knowledge that he or she has gained so far. And then a quite creative process starts, because data science, well, it's called science, but sometimes it's more an art, because there's not a textbook solution to it.
There are a lot of algorithms that you can try out and play around with, and it's all about integrating the business knowledge you found out, in a smart way, into your model. And again, I can make a quick example here. I talked about holidays, and I think it's quite obvious that holidays have quite an impact on our volumes. But what we found out, for example, is that holidays in Australia do not only impact the volumes in and out of Sydney, but they also have an impact on our flights from Hong Kong to, say, Cyprus, for example. And those complex interdependencies need to be somehow taken into account in the model. So this is a little glimpse into what happens there. Now once we are done, well, you're never done with modeling, but once we've found a model we are happy with, that has promising results, we can get to the last step. That's the implementation, and really the business process integration, that takes place here.
I already mentioned that we were quite lucky in this case, because we only had to integrate our algorithm into the existing tool. And what we saw then was, on the positive side, that the improvements that we saw on the theoretical side when we looked into the modeling also materialized in the live environment. We saw improvements of three percentage points in the forecast accuracy, which is huge in that area. But of course, in the initial phase we also found some smaller pain points, which we then took back to the model and fine-tuned the model to get rid of those. But now the model is up and running and integrated into the business. So I hope this gave you a bit of an understanding of how we see and how we approach data analytics, which I think is quite important for the following part.
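As a rough illustration of the kind of model discussed in the modeling step, here is a deliberately naive forecast baseline with a holiday feature. The weekday volumes and the holiday factor are invented for the sketch; the production model is far more sophisticated and captures effects like the cross-country holiday interdependencies mentioned above:

```python
# Toy forecasting baseline (NOT the production model): predict a flight's volume
# as the historical average for that weekday, damped by a holiday factor.
history = {  # weekday -> observed volumes on past flights (invented numbers)
    "Mon": [120, 130, 125],
    "Tue": [90, 95, 100],
}

HOLIDAY_FACTOR = 0.6  # assumed demand reduction on public holidays

def forecast(weekday: str, is_holiday: bool) -> float:
    """Average of past same-weekday volumes, scaled down on holidays."""
    base = sum(history[weekday]) / len(history[weekday])
    return base * HOLIDAY_FACTOR if is_holiday else base
```

Even this trivial baseline shows the shape of the task: historical volumes carry most of the signal, and calendar features like holidays enter as adjustments.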
I already shared some use case examples along the way, but I wanna spend a bit more time on further use cases, and especially on the big initiatives that we're running in that area. What you see on the next page are really the six key analytics solutions that we are developing and working on at the moment. Of course, there are a lot of other very specific projects that are driven by different teams, but the topics that you see here are really the ones that have the highest relevance and the highest potential across our divisions. This is why we develop those into group-wide solutions with a very high degree of scalability. And given they are really important for us, I would quickly like to go through them.
You can split them into two groups. The first three topics that you see here are really addressing our core processes and optimizing them. So the first one, the operational volume prediction: we've just seen the example of the Express network forecast, which is exactly this. This is simply key for our business, to plan our resources in our sites and to plan our transport assets.
And every percentage point of accuracy pays off in all the subsequent planning processes. The vision that we have here is really that each planner in all sites and all facilities has such a data-driven forecast at hand that supports them in the planning steps. The second topic, the task scheduling, is kind of the natural next step, I would say, because we now know our expected volumes from the prediction, and we also have a rough shift plan in place. And now it's about the short-term planning, the exact shift scheduling: determining who works where and when, considering sicknesses and breaks and all those things.
And this is a super complex optimization problem, and our staff dispatchers in our sites spend hours on this task every day. What we did here is we developed an interactive optimization tool that supports the dispatchers in the process. It gives them more optimal solutions, but also helps them to run this process much faster than before. And again, we want to have all our dispatchers using such tool support. That brings me to the third topic, which is routing optimization.
And yeah, routing is of course not new, right? That's the core of our business, and we have been doing that for years and decades. But what we see here is that there are really new trends coming up, because somehow our world is changing faster. We see that the requirements of our customers, but also of our operations, are changing faster, and we wanna change our routes more dynamically. And what we've seen here is that the existing software and tools that were in place were not able to cater for those requirements and flexibility needs.
On the other hand, we saw that internal solutions do have that flexibility and can be really tailored to those needs. So let me go a bit more into the details of routing. The page that you see here we could really draw for any of our solutions: typically, there are a lot of application areas and a lot of projects below each topic. It's not one project, but multiple application areas.
The picture for routing you see here. We have application areas in many divisions, and a lot of them are really focusing on the last mile, of course, because this is a huge cost driver for us. Our vision for routing is really that we leverage the business expertise that we have, and our methodological knowledge, to develop truly customized tools for our specific problems, and that through that we are able to really address the flexibility needs that we see. Now, this might still sound a bit abstract, which is why I brought a very concrete project from the space of routing, and this is the Raptor tool that we developed for Freight.
So the Raptor tool supports our dispatchers in our freight terminals in their daily work. Those dispatchers every day get hundreds of orders, which they need to assign to tours and delivery vehicles. And when they do this, as you see in those boxes, there are a lot of constraints and a lot of things they need to take into account: vehicle sizes, capacities, stackability rules, driving regulations. So the solution space for them simply explodes, and it's impossible for a human being to solve that in an optimal way. This is what Raptor now does for them. The algorithm takes all those constraints into account and comes up with an optimal delivery schedule.
That the dispatchers can then use. And Raptor does that in seconds. So that brings me back to the bottom lines we talked about earlier. You see, first of all, that this is a huge support for our dispatchers, right?
They save a lot of time that they can spend on other tasks, like finding even better deals for the routes that we came up with.
And this solution is being derived in seconds, however using, like in the process that you described earlier, all the input you had from previous observations. What role does a local holiday play? How do Wednesdays compare to Fridays? What's the current traffic situation? All these sorts of things, right?
All these sorts of things are taken into account in the algorithm. And of course, the development was not done within seconds, but now the algorithm really runs in seconds: whenever you give it a new input, you get a new solution as output.
And then there's probably also something like a learning curve for the further fine-tuning of the algorithm over time?
Yes, definitely. And I think one of the strengths of the algorithm is that we can really integrate or include new constraints rather quickly. I can give you one example of that, for one of our customers. We typically roll it out customer by customer.
So we have one customer with a transport network that had a lot of shipments from Eastern Europe into Germany. What Raptor did was calculate the shortest routes, with all the constraints, and those of course also traveled through non-EU countries. Then we learned from the customer that they absolutely wanna avoid that, because of the customs processes; that's effort, that's time. So this was a new constraint for us. The team took that back and could include it in the algorithm within a few days.
And this is what I meant when I talked about the flexibility that we need, because you won't find a software that has all and everything included. With the internal capabilities, we can just quickly adapt it and make it truly customized for our customers. Okay.
So your team is then also in constant exchange with the actual users of the tools.
Yeah. So the team spent a lot of time on-site, really with the dispatchers. And in the initial development phase, it was more or less like a constant hotline that they had. So the dispatchers ran something, and they said, ah, this doesn't make sense.
So we looked into it and iterated through it. And I think this also helped the dispatchers to really gain trust in the tool and in the work that we are doing.
That's important. Yeah.
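The constrained assignment problem that Raptor solves can be hinted at with a greedy toy version. The real tool handles many more constraints (stackability, driving regulations, and so on) and properly optimizes rather than just packing; this sketch, with invented order data, respects only a single vehicle-capacity limit:

```python
# Toy stand-in for constrained tour building: greedily pack orders onto
# vehicles without exceeding capacity, largest orders first. Purely
# illustrative; not the Raptor algorithm.
def assign_orders(orders, vehicle_capacity):
    """orders: list of (order_id, size). Returns a list of tours (lists of ids)."""
    tours, current, load = [], [], 0
    for order_id, size in sorted(orders, key=lambda o: -o[1]):  # big first
        if load + size > vehicle_capacity and current:
            tours.append(current)       # close the full tour
            current, load = [], 0       # and start a fresh vehicle
        current.append(order_id)
        load += size
    if current:
        tours.append(current)
    return tours
```

Even this one-constraint version shows why the manual task is hard: every added rule multiplies the combinations a dispatcher would have to weigh by hand, while an algorithm re-evaluates them in seconds.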
Yeah. Alright. Now a quick look at the other three solutions, which are our so-called advanced data services. Here we really leverage our internal data to generate new benefits or new insights. And the first topic we've seen in the very beginning: the geocoding.
So here, we leverage our internal delivery data to come up with geocodes for new addresses. The second topic, the product specification for customs, is based on a pretty similar idea. So what is it about? Whenever we send a shipment cross-border, and you can imagine we have a lot of those shipments, we need to determine the customs code for this shipment. This is a code that specifies exactly what's inside the shipment.
That could be a cotton t-shirt for men in red, or something like that. And this process today is literally still done either manually by the customs agents or externally via brokers. As you can imagine, both are quite some effort in terms of time or money. But again, here we do have data assets that we can leverage, because we have a huge database of historical shipments. And those shipments all have information like the item description and value and other details.
And of course, for those historical shipments, we also have the customs code attached. So we can learn from this data, again apply our machine learning algorithms, and then predict for each new shipment what the most likely customs code is. And if you remember the process as it has been in the past, this has a huge potential for automation, of course.
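The idea of learning customs codes from historical shipment data can be sketched as a simple word-vote classifier. The descriptions and codes below are invented examples; the production system applies real machine learning to much richer features than word counts:

```python
from collections import Counter, defaultdict

# Invented historical shipments: (description, customs code assigned in the past).
history = [
    ("cotton t-shirt men red", "6109.10"),
    ("cotton t-shirt women blue", "6109.10"),
    ("leather wallet", "4202.31"),
]

def train(samples):
    """Count how often each description word co-occurred with each customs code."""
    votes = defaultdict(Counter)
    for description, code in samples:
        for word in description.split():
            votes[word][code] += 1
    return votes

def predict(votes, description):
    """Return the code with the most word-level votes for a new description."""
    tally = Counter()
    for word in description.split():
        tally.update(votes.get(word, Counter()))
    return tally.most_common(1)[0][0] if tally else None
```

A new shipment description is scored against past ones, and the historically dominant code wins; the real model does the same thing with far stronger features and a proper classifier.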
Now, the last topic in that area is the invoice overdue risk prediction. That's a topic from the finance area, and again I wanna go a bit more into detail here. Once more we leverage data that we have; in this case, it's our internal accounts receivable data.
And we use it really on an invoice level. We come up with algorithms to predict, for any new invoice, the risk or the likelihood that it will be paid late, on time, or even early. Now, this prioritization in the past has been done either manually by a collections expert or based on a set of fixed rules. What we now have is a machine learning algorithm that takes into account hundreds of statistical features and is constantly learning and adapting. And we've seen that this is a really valuable input for the collectors and the collections processes.
And since this invoice overdue risk score has been in place, we've also seen huge benefits for our cash inflow and its steering. And of course, during the corona situation, this has been a very valuable input.
So you're filtering out those invoices that we have sent out which are running a relatively higher risk of not being paid on time. And not simply by only taking past payment patterns, but also taking into account the state of the customer's industry and other factors. So it's not only about how well this specific account is doing, right?
Yes, exactly. As I said, it's hundreds of variables and characteristics about the invoice, the customer, the industry, past behavior, not only looking back a few weeks, but really looking at trends and doing extrapolations. This all happens in this quite sophisticated algorithm, and what it spits out in the end is really, per invoice, a risk flag. Is it high?
Is it medium or is it low?
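The high/medium/low flag described here is, at its core, a thresholding of a model score. A minimal sketch with assumed cut-off values follows; the real model derives its overdue probability from hundreds of features, and these thresholds are invented:

```python
# Hypothetical mapping from a model's overdue probability to the flag shown
# in the collectors' dashboard. Thresholds are assumptions for the sketch.
def risk_flag(overdue_probability: float) -> str:
    if overdue_probability >= 0.7:
        return "high"
    if overdue_probability >= 0.3:
        return "medium"
    return "low"
```

Keeping the dashboard-facing output this simple is a deliberate design choice: collectors act on three colors, not on raw probabilities.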
So that means all the sales guys out there who are taking care of their customers have a clear focus list of which customers to contact and call, to make sure that they are aware of it, and don't waste their time on more secure cases.
Exactly. So it's a very crucial input now for the collectors and for prioritizing their day. And maybe that's a good moment to also show you how that looks in practice, right? Because I talk about machine learning and those statistical features; that sounds abstract, but also fancy. But I can quickly show you how that really looks in our daily work. And to do that, let me quickly share my screen again.
What you see here is a dashboard that a collector at DGF is using to plan and prioritize the day. As you can imagine, the data that we show here is not true, real-life data, but test data that we set up for training purposes for our collectors. What you see here is a list of customers for one of those collectors, with all sorts of information. This is what they already had in the past. What is now new is this colored column here, the IOR column, that assigns the risk flag: medium, high, or low risk of paying late.
And you see it's really straightforward. It has the color coding, it has three categories, so at first glance it already helps you to prioritize where to look first. Now, if we look at one of those customers a bit closer, let's take the second one here, the customer with a high risk flag, you get even more details.
So you see here all the invoices that are attached to the customer, again, all with a score. This is a really high-risk customer, because each single invoice gets a, yeah, a high risk flag. But you can also see here the actions that are triggered based on that, some of them automatically, some of them, of course, derived by the collectors. And, yeah, you might now be a bit, I don't know, disappointed, because you might have expected to see something more fancy, given how sophisticated the algorithms behind it are. But what you see here is doing exactly what it is supposed to do, because we have those algorithms, and the results of those algorithms are really fully integrated into our business processes.
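The triggered actions she points to can be thought of as a simple rule layer on top of the risk flags. The mapping and the invoice records below are purely hypothetical, invented for illustration; the real tool's triggers are not disclosed in the talk.

```python
# Hypothetical flag-to-action rules (assumed, not DHL's real triggers).
ACTIONS = {
    "high": "call customer today; escalate to dunning if no response",
    "medium": "send payment reminder e-mail",
    "low": "no action; standard follow-up cycle",
}

def worklist(invoices):
    """Sort a collector's invoices so high-risk ones come first,
    and attach the suggested action to each."""
    order = {"high": 0, "medium": 1, "low": 2}
    ranked = sorted(invoices, key=lambda inv: order[inv["risk"]])
    return [(inv["id"], ACTIONS[inv["risk"]]) for inv in ranked]

demo = [
    {"id": "INV-103", "risk": "low"},
    {"id": "INV-101", "risk": "high"},
    {"id": "INV-102", "risk": "medium"},
]
print(worklist(demo))
```

The point of the dashboard is exactly this last step: the model output only becomes useful once it is turned into an ordered worklist with a concrete next action.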
So I think what our specific audience does find fancy is the actual real-life impact. Have we seen any improvement or any positive change in collection success?
Yes, we have. So for DGF, in the cash inflow, we've seen a positive impact of a double-digit million euro figure. And also on the EBIT side, this has so far led to a single-digit million euro improvement. So that's the financial side, but of course this comes with a lot more automation potential. DGF, for example, has now installed bots, the so-called digital twins, that treat some of those customers with a very clear pattern automatically. So this is a huge efficiency gain on top. We could reduce our bad debts because we really identify problematic customers earlier. So there are a lot of benefits to that.
Right. Very tangible and relevant.
Yes.
Very good.
Yes. But, I mean, maybe one comment also to make on that. Of course, it had to come with quite some change management and training, because we are now providing what is, at first glance, a black box to our collectors, and we had to build up trust with them, because they were asked to rely on something they didn't know. And we did a lot of measures in that area.
One nice one was that we took a set of new invoices, we asked the collections experts to classify them into high, medium, low, and we let the algorithm do the same. In the end, the algorithm was better, and this was one way that really helped to overcome the skepticism. And now the collectors really see it as, yeah, a support, which it should be, right? Because it helps them really focus on the key things.
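That validation exercise, experts and algorithm labeling the same invoices and then comparing against what actually happened, boils down to comparing two accuracy scores. The labels below are made up for illustration; only the shape of the comparison reflects the talk.

```python
# Illustrative data: the eventual payment outcome per invoice versus the
# labels assigned up front by the experts and by the algorithm.
actual    = ["high", "low", "medium", "high", "low", "medium", "high", "low"]
experts   = ["medium", "low", "medium", "high", "medium", "low", "high", "low"]
algorithm = ["high", "low", "medium", "high", "low", "medium", "medium", "low"]

def accuracy(pred, truth):
    """Share of invoices where the assigned flag matched the outcome."""
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

print(f"experts:   {accuracy(experts, actual):.0%}")
print(f"algorithm: {accuracy(algorithm, actual):.0%}")
```

A real evaluation would of course use a much larger held-out set and look at per-class errors too, since missing a high-risk invoice costs more than over-flagging a low-risk one.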
So often you have to overcome this first wall of resistance, because these guys, these collectors, I mean, they've been doing that for years, and obviously their starting point is: well, I know my customers best. Right?
Exactly. This is often the case, and I think that's a valid skepticism to have in the beginning. And if the machine cannot beat them, well, then it also shouldn't be used. In this case, it clearly could. But very often it's also not really about replacing what they have been doing in the past, but providing input to them so that they can really focus on the things that the machine doesn't know, on the local expertise that they have. So very often it's, I don't know, more valuable input that they can use to then do the last 10, 15% even better.
Alright. Good.
Okay. So those were, I think, quite some examples that showed you, yeah, how we leverage analytics in our daily work and how we integrate it into the business. So, yeah, let me now finish by talking a bit about our approach to really scale this topic of analytics into our organization, which is not as trivial as it might seem. Starting with the organizational setup: what we have established here is a hub-and-spoke approach.
Because for us, it really combines the advantages of a central and a decentral approach. So we have our hub, which is our central data analytics center of excellence. In this team, one of the main things we do is develop those group-wide solutions, so the 6 topics I just talked about that are applied in every division. But besides that, we also act as a talent pool for the organization, we really bring in the latest methods and best practices, and of course we try to connect the data scientist community across the company.
The central team is then complemented by, yeah, divisional or functional spokes. Those are smaller data scientist teams that sit within the business. So that could be an ops analytics team within P&P, or it could be a dedicated finance team. And those teams, of course, are much closer to the business, and they bring in the subject matter knowledge.
Just looking at the invoice overdue risk topic: it was so valuable to have a dedicated finance data scientist there who knew the finance processes and the finance terms, rather than just a data scientist who has never dealt with finance topics before. And this is really the benefit that we see. And very often we also combine projects with data scientists from the hub and the spoke, to really get the best of the two worlds out of that.
So, and that's also addressing one of the questions we already got in, from Alex Wering from AD. So this center of excellence, you're part, more or less, of the corporate center.
Yes.
And having counterparts in the divisions. So it's basically an offer to the divisions. The, tiny but still, cost of this central group is not borne by the divisions, so they don't have to have any fear of: if I try this, it's only gonna cost me. Yeah.
So how do we decide on where to deploy, you know, these tools and techniques? Who's making that selection? Is that coming from the divisional side?
You mean the overall prioritization of topics we are dealing with? Well, maybe we can jump to the next page, because this explains very nicely how this is done. We have a very clear setup now, and in terms of prioritizing, you were right: we are offering our service. Of course, we are also making a, yeah, an estimate of whether this is, you know, a valuable, impactful, beneficial topic, and that has a lot of aspects and criteria to it. But of course, the divisions are also prioritizing the topics they wanna work on and their key, yeah, key focus areas, and then asking us for our support on those.
And I think you can see that in the governance that we set up for steering the whole, yeah, scaling of analytics, because we believe that analytics has to be fully embedded into the business. So you cannot purely drive that from a central perspective. You have to have it embedded in the core functions. You have to have people there who know the topic and who embrace the topic. So what have we set up for that?
It's 3 main pillars. The first one, that's the analytics executive sponsor within each division. This is a designated top manager, typically a divisional board member, who is driving the topic into their organization. So they are on the one hand an ambassador for the topic, but they are also really responsible for defining the analytics road map and for executing and implementing it.
Road map means very clear use cases. So what are your top 5 cases you wanna run this year? Which ones can you do with your own teams? Which ones do you want our central team to support you with?
So this is, more or less, the process for prioritizing that within each division. The second element of the steering that we have is the analytics coalition that we establish in each division. Analytics is a cross-functional topic, so it has to be driven from 3 parties, more or less. Clearly from business, because if business doesn't want it, doesn't commit to it, you will never achieve anything. So it's business, it's IT, and it's the analytics function.
And in this coalition, the senior leaders from those 3 functions get together to really live this shared accountability. They pragmatically decide on prioritization, on roadblocks and how to handle them, to really accelerate the whole execution. And then, of course, we have the 3rd element, which is the group-wide element, our analytics steering board. In that board, which is chaired by Frank Appel, the division sponsors get together. They track and share their road maps, so the divisional road maps that everyone defined, but they are also quite open in sharing learnings and failures.
So we're really trying to help each other, yeah, progress on that journey. But what we also do there, and that is the second part of the answer to your question, is prioritize those cross-divisional use cases, so which one should become solution number 7. This is a topic that we then discuss in the board, with all the divisional sponsors, of course. And then lastly, what we do in the board is talk about topics that, you know, we should drive from a group-wide perspective.
And there's one quite nice example for that, and also a very important one, which is about education. So we discussed in the last board the challenge of capability building and upskilling, which is still quite a challenge, right, in our huge organization. And we decided on a set of trainings that we wanna develop. So our team is now developing trainings for different target audiences, and our first e-learning, so that's a, yeah, basic awareness training module for every employee, will go live in the next months.
So those are the kinds of topics that we can drive from there.
Okay. Good. So, if you were to think of the whole digitization process that we want to go through now until 2025, with the 2 billion spend and the benefit, I think I have to think of that as a sort of S-curve. Are we basically done by now with all the preparatory work in sorting out the details, and now we're really aiming to increase the actual number of use cases out there?
Yes, I think that's a fair assessment. I think we are now really at the point where we can do the scaling and really, like, yeah, transfer it at large into the organization. I mean, we have been dealing with the topic for 5 to 6 years now, so the foundations are in place.
We have a significant amount of people there. Of course, we are not done yet, but this is really, I think, the point to make it large, and this is also why we have established this governance now, because it helps us accelerate and really, yeah, translate it into the organization.
Good.
Okay. So I think with that, yeah, we are at the last page, a short summary that shows our holistic approach to the topic. Some of the elements you see here we've talked about a lot, or I've talked about a lot, right: the use cases, the data scientists, the people, our approach and the scaling. But there are also other very important elements which I haven't talked about yet. Of course, to really scale it holistically, these are very important enablers we need to get, yeah, right and need to get in place. It's the data infrastructure and data governance, to which we are also giving a lot of importance, but also the education part, which I just briefly touched upon, really teaching everyone what data analytics is and what it can do. Together with all these elements, I think we have shown you that, yeah, we have a setup in place that can really help us to generate significant benefits on the financial side, but really also for our employees and customers.
Great. Thanks, Catherine, at this stage, for taking us through the slides. I think that was a very thorough introduction of how we are approaching the whole theme of data analytics and making it deployable and usable within this group. A couple of questions have come in in the meantime, one, again, from Alex, but also from Unibah from BofA. It's: is this something that we are developing internally, with your help, that you have to do internally? Or is there any sort of off-the-shelf third-party software or solution available that you can buy in the market?
And if so, so goes the question, how do you think the 2 approaches compare?
Yeah. I think there's no one-size-fits-all answer to that. I think there are certain processes that are just so much core to our business and key to us, like the routing I talked about, like the Tivo calling, where we definitely get better results, and the better approach here clearly is to develop that internally. Right? These are core competencies, we can constantly adapt it, and we can bring in our logistics knowledge.
We have seen in a couple of examples for routing that whenever we tried to go with an external vendor, with an existing tool, at some point we failed, because there was one requirement that the tool couldn't cater for. So the things that are really close to our core we develop internally, and for some of the cases we also, you know, went out to benchmark ourselves. We ran pitches against, like, experts providing those solutions, and in all the cases we were better or at par with those.
So I think that shows it's really not just the belief that it should be close to us; we are also really, yeah, really better there. Then there are other topics where, you know, there are really good tools out there, and it's maybe a topic that is not so close to our core. Think about topics like, you know, natural language processing, so converting voice or text into structured formats and then running algorithms on it. For the tools that translate the voice or the text into a structured format and all of that, there are really good providers out there.
There, I think, we don't have to reinvent the wheel, but rather pick from what is out there and then maybe use this as an input for the next step or the next level.
Okay. But wherever one of the core input elements is our own collected data, probably the more unique and useful it gets.
Our own data and also our own knowledge. The same holds true for this volume prediction, right? There are a lot of companies who offer, like, time series forecasting, more or less. But we understand our processes.
We understand the interdependencies of the logistics flows and all of that. And this allows us to train the models in a way that, in the end, they result in, yeah, the right outputs.
Which brings me to one question, which was also raised by Muneva and probably comes to many minds: you talk about predicting, and taking past data. I mean, we've just been through a period of massive disruption where, on all levels, a lot of former rules somehow no longer apply. How much of a shock has the whole corona situation sent to the workability of your models?
I think it's a bit different depending on the time horizon you look at. In this invoice overdue risk, for example, the model typically looks back only, you know, a very short term, and those algorithms, I mean, it's not that the machine learns once; they are constantly, well, adapted. The parameters are recalculated every day based on the recent history. And here we've seen that the models could really quickly pick up those changes and trends; we haven't seen problems, we've really seen that it worked pretty well. When we think of long-term predictions, where we wanna predict annual volumes, of course, here we have to see how we handle this year.
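The short look-back she describes, parameters refit every day on recent history so the model tracks a sudden shift, can be illustrated with the simplest possible forecaster. The window lengths and the toy series are assumptions chosen only to show the effect.

```python
import statistics

def rolling_mean_forecast(history, window=28):
    """Forecast the next value from only the last `window` observations,
    mimicking a model whose parameters are recalculated on recent data."""
    recent = history[-window:]
    return statistics.fmean(recent)

# A series whose level drops abruptly (a pandemic-style disruption):
# 60 days around 100, then 20 days around 60.
series = [100.0] * 60 + [60.0] * 20

print(rolling_mean_forecast(series))             # short window tracks the new level
print(rolling_mean_forecast(series, window=90))  # long window still lags behind
```

The short window already reflects the post-shock level, while the long window is still dominated by pre-shock data, which is exactly why the short-horizon invoice model coped well while long-horizon volume models need special handling.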
And this is exactly the outlier question, right? What do we do? Do we average out 2020 with the last 3 years' average volume? So this has been something that you have to discuss closely with the business side and see how you handle it. But it's really more the long-term prediction where this is a problem.
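One naive version of the "average out 2020" option she mentions, stated here only as a sketch of the idea and not as the rule DHL adopted, is to replace the outlier year with the mean of the preceding years before fitting a long-term volume model:

```python
def smooth_outlier_year(volumes, outlier_year, lookback=3):
    """Return a copy of the yearly volume series with the outlier year
    replaced by the mean of the `lookback` preceding years."""
    prior = [volumes[outlier_year - i] for i in range(1, lookback + 1)]
    cleaned = dict(volumes)  # leave the original series untouched
    cleaned[outlier_year] = sum(prior) / len(prior)
    return cleaned

# Hypothetical annual volumes with a disrupted 2020:
volumes = {2016: 980, 2017: 1010, 2018: 1055, 2019: 1090, 2020: 640}
print(smooth_outlier_year(volumes, 2020))
```

Whether to smooth, down-weight, or keep the outlier year is exactly the judgment call that, as she says, has to be made together with the business rather than by the model alone.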
Okay. And, so, yes, we're the largest player in the world, but there are other large players as well. Are you aware to what extent our peers are following a similar approach? Is there any exchange, even?
Yeah, I wouldn't call it exchange. I think, yeah, all of our competitors are looking into those topics. Take the topic of routing, for example: this optimization topic, operations research, has one large conference every year, and of course all our competitors are at that conference, and we are fighting for the best talent there. So, definitely, everyone looks into that. What is special about our approach, without knowing how they do it, is, I think, really this close interlinkage between, well, the data science part, but not from an academic perspective, really closely linking it to the business from the very beginning, and combining our logistics expertise, which is our asset, right, with those new technologies.
So probably the advantage lies in this breadth, the multitude of the divisions. A, we can use data inputs not only from forwarding, but also from Express, Supply Chain, what have you. And I would think also the other way around. I mean, you said you're running 100-plus data analysts; I don't think there's a big number of forwarders out there having access to that number of resources. Right?
Yeah. Exactly. And not only can we take the data of all divisions, we can also, once we've found a good approach, easily roll it out. You know, when we develop something for Express, we can transfer the algorithm and the intelligence behind it to the other divisions.
Alright. Good. Just a quick reminder, before we come to the last question that I have received so far: if you want to place any of your questions, simply punch them in there. I think one of the questions is on the 2 billion digitization spend. I'm happy to clarify that this is obviously baked into our guidance, short and midterm, and part of the overall performance. So it's not something that comes on top or as a one-off expense.
It's all baked in there, but I think it's a very important signal to the organization that we're serious about it, to spend a total amount of 2 billion only on digitization. And from listening to what you said, I think there shouldn't be much doubt that, with the multitude of projects, we are able to harvest that and get to a very decent run rate of financial benefits over time. Great. Well, that's fantastic. It's bringing us to the scheduled full hour.
I think we...
Perfect.
...covered all the ground. I hope we covered all the questions and definitely took care of everything that you sent in. So thank you very much. Thanks, Catherine. Hugely interesting, and it gives you an idea that one could talk about this probably for many days if you really dig into it.
I definitely could. Yes. Yeah.
So thanks for sharing that. I hope it was useful also for you guys, to give you a better understanding of how we are steering the group through the next number of years. Okay. Very, very good. Thank you very much.
Looking forward to seeing you on October 5th, when we are running our next tutorial. Until then, bye bye. Have a good rest of the day.
Thank you. Bye.