Good morning, everyone, and welcome to today's webinar, Optimizing Hybrid Platform Visibility in Financial Services with Softcat and Dynatrace. My name is Ebow Barnes, and today we have two speakers that I'd like to introduce: our very own Gary Hawkins here at Softcat and Martin Bradbury from Dynatrace. Before we get started, I'd like to remind everyone that you can submit questions at any time during the webinar using the Q&A tool that you should see on the Zoom navigation menu, and all being well, as part of the agenda, we'll take some time at the end to go through these questions and answer them for you. Please note a recording of this webinar and a copy of the slides will be shared with you. There will also be an on-demand version of the event, and that'll be live on Softcat's website within the next week.
If you have any further questions following the webinar, please speak with your Softcat account manager, or you can email me, ebow@softcat.com. Without further ado, Gary, I'll hand over to you to start the webinar. Thank you.
Thank you. Good morning. Welcome. As you're well aware, today's webinar is on full-stack observability within the finance sector. For the ease of pronunciation, I will refer to it as FSO as I go, so don't be surprised. I'm personally delighted to share some of the insights on this critical topic that really is addressing the complexity of the entire digital environment. How do you maintain regulatory compliance, and, of course, how do you address the heightened cybersecurity threats faced by most financial institutions? We're seeing a growing importance of observability in the whole finance sector, particularly where organizations are recognizing the need to have more comprehensive monitoring end-to-end. How do you get better system reliability? How do you optimize overall performance, and ultimately, how do you manage risk more effectively and proactively?
Throughout this webinar, we'll endeavor to also provide some insights into how full-stack observability will help you achieve higher ROIs, improve system uptimes through predictability, and how to enhance further operational efficiencies. So really, whether you're a finance professional leveraging digital transformation within your business, whether you're an IT leader or an exec in insurance and finance, or you're a DevSecOps IT professional, how do we help you optimize that? And how do we help you manage those resources end-to-end, whether it be the network, on-prem, the cloud, the edge, and, of course, end-user performance for customers? I think one of the things that sticks out for me as I've been speaking to customers over the last year, an important point about observability, is that it's for the front end of the business as well as the back end, the technical side.
This is particularly important. It is not just a technical purchase. It's not a technical tool. It is much more predictive in the analytics that can help shape your business needs and your customer experience. So really, aligning those technology decisions with business outcomes very much goes into the conversations with the right stakeholders. Many of you on this call may be responsible for certain tools within certain areas, but do you actually correlate that information into the business side, or are you talking to the network, the applications, the help desk, or the cyber team?
And the idea is, and I use this term, it's an old one, but it kind of makes sense as a mantra: observability is here today not to replace everything you currently have, but to ingest that data, make it more meaningful, and help you detect and resolve anomalies proactively. So where do you start? In today's session, we're looking to help break down the isolation of those areas, those departments, those tools, and help you prioritize, define, and measure your success criteria based both on the business impact and on aligning IT's awareness beyond just the technical aspects. There is quite often a gap between the language of the C-suite, the language of the business, and the language of IT. And I think we're very much responsible, as Softcat and Dynatrace, for helping bring that stakeholder management together.
I think the intent also is to help you plan pre-deployment, before you go live. It's a stealth deployment. Not every company will include observability across all of their departments in one go. So what we look to give you is three key areas: a holistic view, giving you a complete picture of the entire IT environment end-to-end, coupled with real-time insights, enabling faster, more accurate decision-making for the business; then predictive analytics, letting IT do synthetic testing to make sure that things that could go wrong are recognized prior to going wrong; and then data correlation, which we see siloed in many companies.
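That synthetic-testing idea can be sketched in a few lines of code: probe a user journey on a schedule and flag degradation before real customers hit it. This is a minimal illustrative sketch, not any vendor's API; the probe is stubbed so the example is self-contained, and a real check would issue an HTTP request to the page being tested.

```python
# Minimal synthetic-check sketch: time one probe of a user journey and
# compare its latency against a threshold. The probe callable and the
# 500 ms threshold below are invented for illustration.
import time

def check_journey(probe, threshold_ms: float) -> dict:
    """Run one synthetic probe and compare its latency to a threshold."""
    start = time.perf_counter()
    ok = probe()                                   # e.g. "did the login page load?"
    elapsed_ms = (time.perf_counter() - start) * 1000
    return {
        "healthy": ok and elapsed_ms <= threshold_ms,
        "latency_ms": round(elapsed_ms, 1),
    }

# Stub probe standing in for a real request to, say, an account login page.
result = check_journey(lambda: True, threshold_ms=500)
print(result["healthy"])  # True
```

Run on a schedule from outside the data center, checks like this catch degradation on the last mile before a customer ever reports it.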
If you think about AI and data governance, it's the same here, just getting a handle on which data is meaningful, which data you need to ingest, which data you need to correlate. So really, just to kick this off, and I don't want to hold back too long before I pass over, to recap, there's three things top of my mind when I speak to customers. So for FSO, customer digital experience is key, and I covered that before, helping you get real-time tracking of user interactions using the right metrics, the right analytics, understanding the right performance of those microsites for customers coming into your finance services, and then even down to the website or page performance, geolocation analysis. Where are your users coming from? Does that highlight an area of concern? If there's degradation to the network, or how does that end-user performance look?
So we're going to give you quantitative and qualitative analytics and ultimately focus on the customer experience. And I think really, from an IT standpoint, it was stated that about 60 appliances for every transaction across on-prem to cloud to end user is typical. So the number of anomalies that could occur in between is quite high. So really giving you the quick identification from a database to an app to a website. If that's a three-tier architecture sitting on-prem, that's fine, but most of us are now cloud first. So where does that alert need to be recognized? And what do we do even to be able to look at the last mile? So again, helping IT become much more collaborative, and correlating that alerting system so that you're seeing alerts before problems happen, not after. And our second one, enhanced security.
As you know, prompt detection and regulatory compliance are never going to go away. I think we live in a world of when, not if, we get attacked with ransomware, and that's one of the key areas, so how do we help you stay regulatory compliant? And then ultimately, recognizing that IT teams and the business are finite in the number of resources they have, we're looking for technology to help with that. And observability is one of the good use cases where AI underpins it quite well in that predictive world. So how do we optimize your operations? How do we improve the overall security and the quality of the services? And how do we help you improve and resolve faster? So at this point, we're just going to bring up our first poll. So I'll give a couple of seconds before I switch gears and hand over.
So how many monitoring observability tools do you have in your organization today? So as I'm letting you think about that, I will now switch over to the second part. So I'm delighted to introduce Martin Bradbury. He's the Regional Director for Strategic Enterprise U.K. and Ireland at Dynatrace. And welcome to the call.
Thanks, Gary, and good morning to everyone. Pleasure to be with you. I run strategic accounts for Dynatrace in the U.K. Prior to starting this role, I ran financial services for Dynatrace in the U.K. as well, so I spent a reasonable amount of time in this sector talking to customers about their challenges around observability and looking at strategies to help modernize that. Just for those of you that may not know Dynatrace, just as a brief introduction, effectively what Dynatrace does. We're all about simplifying the complexity of your IT environment. Our platform uses AI to provide real-time observability across your tech stack from application to infrastructure, from mobile to mainframe, from mouse click to code execution at the back end.
What we do radically differently is we bring together all of that data into a single platform and then use AI and automation to give you answers from that data. So this isn't about data on glass and you guys trying to figure out why something's red or amber. This is about the Dynatrace intelligence in the platform actually giving you root cause of what's happening in your environment. This means you can proactively address issues, optimize performance, and drive innovation all whilst reducing the costs of monitoring and observability in your environment.
Okay. Well, thank you for the intro. So let's just jump straight in. So from a simplistic view, what are the business challenges that observability addresses within the financial services sector?
So I think if you think about the technology landscape that you guys operate in today, you have a high level of complexity. For most organizations, you will have an amount of legacy, and you will also be trying to modernize the business at the same time. So you'll be building new agile apps, and you'll probably have mainframe technology as well. So traditional organizations have been under increasing pressure in that landscape to deliver better customer experience against this level of complexity and against new market entrants that have come in without that kind of legacy to haul behind them. So the complexity in the financial services environment has not decreased. It's increased. And actually, it's way beyond the comprehension of us as individuals in many cases to really understand the complex flows and interdependencies of that environment. So alongside that, the regulatory landscape has also been evolving.
The cost and complexity of ensuring compliance with all sorts of regulations within an environment has really given organizations massive challenges to manage. And so if I think about the three things that surface in my conversations with customers, no great surprise, the first is regulatory compliance. And I think probably the most prevalent regulatory topic in the observability space is operational resilience regulation. So things like the PRA SS2/21 in the U.K., or DORA for those of you that are operating in organizations doing business in the EU market. The commonality across those types of regulations is that they require you to identify your important business services, define the impact tolerances you can accept in the event of an outage or issue, and demonstrate, importantly, that you can proactively monitor these services end-to-end.
And coming back to what I said earlier, that complete visibility, mobile to mainframe, is something we hear organizations really struggling to achieve because of the siloed legacy tooling that you have. I didn't actually see the poll results, but I'm guessing most of you will have put big numbers of tools in there. If you didn't, you maybe were telling a fib. Most organizations we go into have 15, 20, 50; I think 80 is probably the record I've heard for observability or monitoring tools. The second thing is improving digital customer experience. That really comes down to what Gary spoke about earlier, which is looking at the user journey view of your technology, top-down rather than bottom-up from infrastructure. So this could be something like an account opening journey or a personal loan application.
Often the telemetry and visibility and analytics that an organization will have will be completely separate from the IT health and system health view of the world. So you might be able to say that your account opening journey is broken, but do you know immediately what the cause of that is, what the impact of it is? Are you able to proactively reach out to your customers if it's an existing customer journey and say, "Hey, we know you've had a bad experience. We fixed that, and here's something to kind of compensate you for the inconvenience." So it's really about applying a business lens and using observability to focus from that perspective. And the final point, and I sort of touched on it, I guess, in the first point, was operational efficiency.
Your bottom line in financial services is as important as it always has been, arguably more important. We hear consistently that we have these silos of data. Teams have gone and built out their own monitoring capabilities to suit their own purpose within the world that they're responsible for. To be very vanilla about it: network team, database team, cloud team, right? We kind of hear the cloud team use something different, the database team use something different. And that's all well and good in the isolation of the world that they're responsible for. But actually, you don't deliver services to your customers with a single point of technology, right? You need a view across that whole stack to understand how it's performing. The second part of this is, frankly, cost efficiency.
So if you're paying for 10, 15 individual tools, you might not be paying necessarily huge amounts for them individually in isolation, but I guarantee that that number is a significant impact to the business in terms of cost, but it's also an overhead in terms of management. And when you get an issue, trying to sit 15 subject matter experts around a table or a war room with 15 different views of data, arguing about whose data is right or wrong is really actually very cost inefficient, particularly when your customer environment might be down, you might be losing revenue. So that's kind of the three things that I hear regularly, Gary.
Thank you. And I think it goes back to this: when you measure the number of monitoring tools companies have, and then you divide it by the number of departments, how often are they really collaborating, and how much correlation of that data is there? As I've gone through my industry in the last 30-odd years, application maturity seems to be top of my priority list for giving guaranteed business outcomes. And then, rightly, as you mentioned, data integration and correlation. And then really, to me, the third most important thing is the culture and the people and the resources, helping bring them together. And that's where observability really measures its success. So we have another poll. I'd like to bring that up now before we go to the next question.
So just for those who are hoping to get back to it, it's "Which aspect of full-stack observability do you believe would likely be of most interest?" And obviously, you have the pull-downs to take the questions. Okay, so while you guys are doing that, let's keep moving. So Martin, what are the goals and critical considerations for a modern observability strategy? What should they be thinking about? What should they be doing first?
Yeah, sure. So I think for me, we will quite often go into organizations that are thinking about this from a technology up perspective. And what I mean by that is they'll often start with the problem statement, "We've got too many tools. We need to sort that out." And whilst that is a valid problem, it probably isn't the most strategic point of view from a business perspective, and it's probably not going to get you access to the funding that you need to do that piece of work. So looking at the slide, where I think we need to go is we need to have a clear vision for why we are looking to be or do something different with observability. And that leads to what business outcomes are we trying to get from the strategy? Are we going to look at delivering a superior customer experience?
Do we have a board-level KPI on customer satisfaction or retention? Are we looking to optimize our infrastructure costs? Are we looking at how observability can play in sustainability? The thing about observability that makes it a really interesting space to be in is that it's so intertwined in the IT environment. The context you can spin on that data is almost limitless. One example, around sustainability goals, was working with Lloyds Banking Group last year. There's a public case study about this, released around this time. We actually built a Carbon Impact app. Because we're embedded in the infrastructure, at the physical layer or the cloud layer, we can see what that consumption looks like. It wasn't a big stretch to bring in the cost and carbon impact of that environment to give the organization a view.
And that's something we made available to all our customers. So yeah, that's just one example of how observability is really woven into the fabric of your organization. So when you look at the business outcomes, you then need to look at, obviously, what is the cost benefit for that? And most organizations will focus on the hard cost benefits, such as retiring tools. Great. That will probably make your project to some extent self-funding. But you need to think about the transformation benefits as well. Say you're able to reduce the number of severity-one major incidents that you have by 50%, and you're reducing the mean time to resolve those incidents (MTTR, a common phrase in the observability world) by 80%. That's a typical stat that customers would see, an 80% reduction in time to resolve, because we're pinpointing root cause.
How do you quantify the business impact of that? And how do you quantify the productivity gain? And can you associate a financial benefit with it? That's really important, because that's how these observability programs become C-level board agenda items. I've been in the industry for a while, and Dynatrace is probably the company where we get the most access to senior-level people, and it's typically the C-suite that are sponsoring these initiatives because of the business outcomes they can achieve. I think it's also really important to see the opportunity to transform and to do something differently, not just recreate what you've always done with the tools that you have. And that, again, is a common conversation, right? "We've got 700 dashboards. Can you recreate them in Dynatrace?" Frankly, yes, but no, you shouldn't be doing that. So this is an opportunity to do something differently.
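To make that quantification concrete, here is an illustrative back-of-the-envelope model of the business case: fewer severity-one incidents and a shorter MTTR. All the input figures below (incident counts, hours, cost per outage hour) are assumptions invented for the example, not Dynatrace benchmarks; only the 50% and 80% reduction rates come from the discussion above.

```python
# Rough annual cost of major incidents: count x average duration x hourly cost.
def annual_incident_cost(incidents_per_year: int,
                         mttr_hours: float,
                         cost_per_outage_hour: float) -> float:
    return incidents_per_year * mttr_hours * cost_per_outage_hour

# Baseline: 120 sev-1 incidents a year, 5 hours average MTTR,
# GBP 40,000 per hour of outage (all hypothetical figures).
before = annual_incident_cost(120, 5.0, 40_000)

# After: 50% fewer incidents and an 80% MTTR reduction,
# the rates quoted in the webinar.
after = annual_incident_cost(int(120 * 0.5), 5.0 * 0.2, 40_000)

print(f"Before: {before:,.0f}")          # 24,000,000
print(f"After:  {after:,.0f}")           # 2,400,000
print(f"Saving: {before - after:,.0f}")  # 21,600,000
```

Attaching a number like that saving to the program is exactly what turns a tooling project into a board agenda item.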
So it's how do you use that AI capability to get precise answers from data, pinpoint root cause, and route that issue to the right group to fix it quickly? Essentially, even moving to auto-remediation capabilities, right? So being able to fix those incidents automatically without human hands touching them. So again, it's kind of what is the opportunity here to work differently and to be more effective? If you go and deploy any modern observability tool, Dynatrace is no different, and try and force-fit it into the way you've worked for the last 10 years, you're not going to get the value from it. You might as well stay where you are with the tools that you have. Another area of consideration that has not historically been on people's radar is the use of observability within test and development and in your software engineering teams.
Often, organizations look at the cost of observability and say, "We've got to do that in tier-one production systems." And actually, it's a false economy. If you can fix an issue in non-prod before it gets to production, well, the industry stat is that it's 100 times more expensive to fix once it gets into production. And I think we can all agree that fixing an issue earlier in the cycle, before it gets into production, is a good thing. So actually having observability deployed into those environments is really valuable. The other part about it now is that, because we've moved to consumption-based technology, you can spin those development environments down at weekends and at times when your teams aren't working. It's exactly the same for observability: you're only going to consume observability software, at least from Dynatrace, when that development environment has been running.
We've got customers that will down their observability levels over the weekend in non-prod, so they'll reduce it to the lowest possible cost level, just to make sure it's cost-effective. The second angle relating to this in non-prod, and the reason I'm talking about it is because it's probably one of the hottest areas of discussion in these strategic conversations, is that you probably spend a lot of money in this industry either on software developers you employ or on third-party resources developing software for you.
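That weekend scale-down pattern is simple to express. The sketch below is hypothetical: the tier names and the scheduling rule are invented for illustration, and a real setup would have a scheduled job call your observability platform's API rather than just return a decision.

```python
# Decide which monitoring tier an environment should run at on a given day.
# Tier names ("full-stack", "infrastructure-only") are illustrative only.
from datetime import date

def monitoring_tier(env: str, day: date) -> str:
    if env == "prod":
        return "full-stack"            # production stays fully observed
    if day.weekday() >= 5:             # Saturday or Sunday
        return "infrastructure-only"   # lowest-cost level for idle non-prod
    return "full-stack"

print(monitoring_tier("dev", date(2024, 6, 8)))   # a Saturday: infrastructure-only
print(monitoring_tier("dev", date(2024, 6, 10)))  # a Monday:   full-stack
```

With consumption-based pricing, a rule like this directly trims the observability bill for environments nobody is using.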
If we can deliver high levels of visibility into our development teams and our engineering functions, and we can give them the ability to pinpoint where issues occur, what the performance of their code is, and how that's going to translate into production, potentially gating the software development life cycle to stop poorly performing releases getting out, that's going to drive a huge amount of productivity. So we don't want, or I certainly wouldn't want if I were paying for them, highly paid, highly valuable development resources sifting through logs to try and debug an issue, right? What I want is them building differentiated functionality that helps my business compete in the market, and that's a big focus area.
So if I sort of wrap all this up and put a bit of a bow on it, I would say: hard cost savings that can be achieved through consolidation; operational efficiency savings through things like reductions in the number of incidents and in MTTR; user adoption goals, so you've paid for an observability platform, are your teams consuming it, are they using it, and are they getting the best value from your investment; ensuring that I'm leveraging the modern capabilities in the platform, so automated root cause, AI, auto-remediation; and use of observability to make my developers more productive and to reduce the number of defects making it into production.
Yeah. It's interesting, because on a number of the calls that I'm sure you've had and I've had, a lot of customers have all the right tools, actually, but they've only used a fraction of them. I shouldn't say "all the gear, no idea," but essentially there's that gap between the correlation of the network to cyber to help desk to applications to IT out to the business. So very often it's about helping them make use of what they have, and observability is a good way to then leverage all of that information. So yeah, it makes perfect sense, and obviously a well-trodden path and process. So I'm just going to go back to the outcome of the second poll. It's quite interesting, actually, because I'm delighted to say enhancing customer experience is number one. That was 36%.
I'm going to say I'm even more delighted that security and compliance came in just under it, because I'm sure that most people on this call feel quite reassured that they have the right practices, regulatory compliance, and security governance. That came in at 27%, matched by increasing operational efficiency: so 27% for security, 27% for operational efficiency, and 36%, clearly the winner, for customer experience. I'm not surprised about modernizing legacy systems; I think most finance verticals have got a cloud-first strategy, or have certainly funded the right path, and then for cloud migrations, at that point, they're integrating new services and features to make sure they adopt well, so that makes sense. Okay, so what are the barriers to success? What should customers be aware of, apart from hearing the good news? What do customers need to think about before they jump into observability?
I think, actually, just touching on those poll results just briefly, I think it's really interesting because if you almost stack ranked the result in the way that it came in with what's most relevant to the business and what's more relevant to the board in your organization, I think it's absolutely how you would order that as well. So it's good to see that there's a good alignment between folks we've got on the call and what we see when we go into those organizations. I think historically, deploying observability, particularly at an application level, has been deemed to be difficult. One of the things I think we've done a fabulous job of as an organization with our product is to make it super easy to deploy, automatically inject into your application code, all that good stuff.
So actually, I think the technical problem, to my mind, has largely been fixed. That problem is much less of an issue than it used to be. I think the biggest barrier to success in most of these organizations, and it's why I focused this slide on it, is how you go about implementing the change: how you go about running the platform, and how you enable the business to consume the platform. Historically, and still today, many organizations have had, and do have, a centralized monitoring team who configured and deployed the tools, and probably actually monitored the output from them. And I think as the technology landscape has broadened, it's been very difficult for central teams to keep up with the demands and keep up with the skill sets required.
So if you think about the span of that mobile-to-mainframe estate, and I'll come back to that because it's so relevant in this space, it's beyond even the most well-staffed and funded monitoring teams to really have every single one of those skill sets within them. So organizations that are doing this really progressively and really well have moved away from that model a little bit, I think. And it also, I guess, comes back to a symptom of why the tool sprawl started in the first place: the cloud team probably felt that the organization didn't understand their specific needs and went out and bought something or built something to suit. As I say, a modern observability strategy in a large enterprise requires a more progressive approach.
We work with many of our customers around building this observability practice and center of excellence. What we recommend, and always try to help our customers implement, is a COE that has ownership of the platform and owns its strategic direction, but ultimately is an enabling function that allows the business to come and consume it, and allows your application teams to consume it on a self-service basis. And I think for some organizations that is a step, right? It's relinquishing control to a certain degree. But where we see the most success is in the organizations that have really embraced the observability practice model and built a COE.
I think the other area, and if I just refer to an example, is a large building society in the U.K. that we won as a new logo customer about 18 months ago. They had three key tools they were looking to replace and consolidate. Historically, they'd had quite a traumatic deployment of some of this technology. It had been quite difficult, it had been very manual, and it had taken a long time to get the business to buy into the change. This time, they took the approach of building the capability and bringing the business along for the journey to work with that tool set. That proved to be super successful. They delivered the project ahead of schedule, and everyone felt like they got a good outcome from it.
We've got multiple examples where that's worked really nicely in these sorts of financial services customers. I think another critical area, and it's probably the number one, actually, I probably should have done these in the other order, is executive sponsorship. If you're going to do something enterprise-wide with observability, you will need a significant amount of change, right? You will need to persuade some people to relinquish the tools that they have; there's a lot of tool hugging that goes on in these organizations. You will have people that are very attached to their existing ways of working and might not really understand what a modern observability platform can deliver for them.
So you need a senior executive behind the strategy who can connect what you're doing at a deployment and implementation level back up to the strategy for observability, but also what are the business outcomes in the company strategy that we're looking to achieve through doing this. So critical, really. It's 100% got to be present to be really successful at doing this.
Yeah. So centralizing the monitoring teams across the stakeholders is critical. Actually, when I started talking about this over a year ago, when it really started to take shape, everybody wanted their own personal dashboard. They wanted the bells and whistles; it was the Starship Enterprise, as it's always been. But actually, it's quite interesting, it's the opposite. When you've got really good data and you've correlated it, and you've got predictive analytics, why would you let all the departments know everything that's going on? You only need to alert the departments where there's an impact or something that they need to do, and put out advisories. And that automation and that predictive flow seems to make more sense now. So it's not overwhelming people, and then underwhelming them with the ability to actually do something; it's making sure they've got the right data at the right point. So that telemetry is pretty key as well. Good stuff.
So how do organizations get started? What do they need to be thinking about and doing first?
Yeah. So I think if we kind of play back maybe what we've spoken about over the past 10 minutes, having a clear strategy, having the right business case, having the right organizational structure and mindset, plus the executive support is an amazing foundation for success. I think we just had a slide advance automatically. Always check for slide build, folks. One of the first things I think to consider is your ecosystem. One thing I think that is very clear to me about how Dynatrace work with our customers is we don't exist in isolation. We integrate within your ecosystem. Probably one of the most important things, and it's typically the ITSM platform where the rubber meets the road here, is making sure that we can integrate and use that to drive automation.
What we're talking about here is something far more sophisticated than Dynatrace sending alerts to ServiceNow. What we're talking about is really making sure that the problems Dynatrace identifies are routed to the right teams, that we can trigger auto-remediation actions, and that maybe we're using Dynatrace discovery data to populate your CMDB. So that, I think, is absolutely critical. It's also making sure that the Jira integrations, the GitHub integrations, whatever it is that we have to be part of, are all there, so that we can do what you need us to do. The next thing to think about is deployment and coverage. Ideally, you want the broadest deployment of observability you possibly can to get the coverage. If you can't see it, you can't observe it.
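The "route problems to the right team" idea can be illustrated with a small handler. This is a hypothetical sketch: the event payload shape, the queue names, and the routing table below are invented for the example and are not the actual Dynatrace or ServiceNow schemas; a real integration would work through those platforms' APIs.

```python
# Illustrative routing table: which team owns which business service.
# Service and queue names are made up for this example.
ROUTING = {
    "payments-api":    "payments-sre",
    "account-opening": "digital-journeys",
}

def route_problem(event: dict) -> dict:
    """Turn a detected problem event into an ITSM ticket request for the owning team."""
    service = event.get("affected_service", "unknown")
    return {
        "queue": ROUTING.get(service, "central-ops"),   # fall back to a central queue
        "priority": "P1" if event.get("severity") == "critical" else "P3",
        "summary": f"{service}: {event.get('title', 'unspecified problem')}",
        # Only attempt automated runbooks for services with a known owner.
        "auto_remediation": service in ROUTING,
    }

ticket = route_problem({
    "affected_service": "payments-api",
    "severity": "critical",
    "title": "Response time degradation",
})
print(ticket["queue"], ticket["priority"])  # payments-sre P1
```

The point of the sketch is the shape of the workflow: root cause arrives already attributed to a service, so the ticket lands with the owning team (or triggers remediation) without a war room triaging it first.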
If you can't see the traffic flow, you can't see the application flow. So whether you use an agent-based collection methodology, or OpenTelemetry, or a hybrid of both is kind of up to you on a workload-by-workload basis. But for me, it's about getting deployment out there quickly in a cost-effective manner. Traditionally, one of the barriers has been that full-stack observability was deemed an expensive solution for everything, right? But not every workload is critical; not every application is of equal criticality. That's why we have different agent capability levels now, with different price points associated with them. So you can deploy very broad and very wide at a very low-cost model and then uplift your high-criticality applications to full-stack where you need to. Bringing in the relevant business stakeholders is also really important.
So again, where customer experience is right at the top, you have to bring in the application teams and the functions that own those end-customer journeys early, to give them an idea of what's possible. We regularly see one or two business functions become huge champions for the deployment of observability within an organization. Once they've showcased what they've achieved with the platform, you'll quite often find other people go, "Hey, what's that? I'm really interested in that. How can I get some of that for my own business and my own application?" So that's important. One typical way we work with customers is around a POC. Running a POC is important because some of this stuff you have to see. The marketing slides are great. We've got great marketing slides. They're very glossy.
The story sounds good, but actually seeing this stuff in the real world is really powerful, so we often run POCs over two to four weeks, and customers are amazed by the insight they get from that and by how the platform will support them and help them with their strategy going forward.
Good stuff, and I think the proof of value or proof of concept absolutely hinges on what customers see. We've got to get it off the page and into their live environment so they can see what it shows them. Everything runs in cycles: they have busy periods and application deployments, and unless they can actually test against those, it's not going to be as meaningful for their business. And again, that's the other point about the stealth deployment: getting other departments to see the value once department one has taken it on, and what it could do in the future. So it makes perfect sense. I think we're almost at the end, Martin, so thank you so much for the insight. I'm going to pass back now to Ebow to see if we've got any questions to take before we wrap up.
Yeah, thank you for that. We received a couple of questions, which we should be able to get through now. I'm not sure who this is for, so I'll let you fight over it yourselves. The first one is, how can Full-Stack Observability help us improve our compliance and security posture in the face of increasing regulatory requirements?
Shall I go to that one then, Gary, I think?
I think you'd be better placed than me.
I think we touched on some of the compliance aspects earlier, but it's about having that end-to-end view: being able to look at an important business service or a critical customer journey, or even right down to measuring the Open Banking SLAs and reporting back to the regulator on those. There's a lot we can do at the compliance level because we see those user interactions. But security, which we've not really spoken about extensively this morning, is another really interesting one. It comes back to that point of context. Because of where we are embedded into your platforms as an observability platform, we can see things in much more granular context than other tools often can. As an example, we have a capability called Runtime Vulnerability Analytics.
That is, do you have code libraries in your environment that have vulnerabilities? Now, you can get lots of tools that scan for those vulnerabilities, and they'll give you an immensely long list of libraries in your estate that contain them. But because of where we're embedded, we can say, "Actually, that vulnerability is present in a customer-facing web application with a database attached to it, and it is being loaded into runtime," versus a vulnerability that's just sat there, never loaded into runtime, with arguably zero exposure. So we can help you prioritize remediation. That's just one example. Another is our security posture management capability: running automated checks around key compliance metrics in your infrastructure and technology environment to give you a dashboard.
If you have a look on our website, we've built a DORA dashboard specifically for customers in that space. Clearly it's not going to cover everything in DORA for everyone, but it's a good lens on your security posture in specific environments, running regular checks and making sure you can see that those are happening automatically.
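The runtime-exposure prioritization Martin describes can be sketched as a simple scoring pass over scan results. The field names (`loaded_at_runtime`, `internet_facing`, `touches_database`) and the weights are hypothetical illustrations, not Dynatrace's actual model.

```python
# Sketch: rank library vulnerabilities by runtime context rather than
# raw severity alone. Field names and weights are illustrative assumptions.

def exposure_score(vuln: dict) -> float:
    """Weight a vulnerability up when the library is actually loaded at
    runtime, internet-facing, or sitting in front of a database."""
    if not vuln["loaded_at_runtime"]:
        return 0.0          # present on disk but never loaded: park it
    score = vuln["cvss"]
    if vuln["internet_facing"]:
        score *= 1.5
    if vuln["touches_database"]:
        score *= 1.2
    return round(score, 2)

vulns = [
    {"lib": "log-utils", "cvss": 9.8, "loaded_at_runtime": True,
     "internet_facing": True, "touches_database": True},
    {"lib": "old-parser", "cvss": 9.8, "loaded_at_runtime": False,
     "internet_facing": False, "touches_database": False},
]
ranked = sorted(vulns, key=exposure_score, reverse=True)
print([v["lib"] for v in ranked])
```

Two vulnerabilities with identical severity scores end up far apart once runtime context is applied, which is exactly the prioritization argument made above.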
Thank you. The next question is, "We're in the process of modernizing our legacy systems and moving to a multi-cloud environment. How can full-stack observability support this transition and ensure we maintain a high quality of customer experience?"
It was interesting because whoever that question came from didn't actually specify what legacy means for them. Talking to customers, legacy could be mainframe, last-generation server technology, or going from one cloud to another. So I don't want to pigeonhole legacy systems as the old server sprawl moving to cloud, because we have both now; now we have VM sprawl too. Why is it important? Because, as I said before, you often end up with multiple monitoring tools covering on-prem versus one cloud versus another. As we speak to AWS, Azure, and GCP very often: why would they extend the feature functionality of their monitoring and that accessibility to another cloud? It's their core business, and they all have their preferences in the way you manage cloud deployments in AWS versus Azure.
So that correlation of information is quite often difficult to manage, and we don't expect customers to hire more and more staff just to look at the nuances. Modernization starts, as I said previously, with application maturity: making a workload agile enough to move from one cloud to another is modernization. It's not just taking on-prem servers and moving them to a cloud. So how do we help with that? How do we look at the data coming from AWS and from Azure, and how do you bring in business continuity and DR strategies? When you think about that modernization, it's everything: what changes from a security posture, what changes from a networking perspective, what changes in application deployment, and how that move to multi-cloud actually happens.
As Martin quite rightly said, customers are looking at the business angle. Whether you're looking at ServiceNow, Snowflake, or Databricks, there's so much feature-functionality analysis that could be done per department, but bringing that together into simplified dashboards also helps. That's an area where observability is not always talked about, but it's becoming more prevalent as we look at the use cases. We can definitely help.
Yeah, just to add my perspective: we've engaged with two or three customers, just in my team, over the past 12 months where a cloud transformation was the reason for the review of their observability strategy, and to some extent the reason the project got funded. Having a very clear lens on what the performance of a user journey was before it migrated and transformed, versus what it is once it's moved, is one really important lens. The other is de-risking the migrations, right? It's not unheard of with these things, particularly lift-and-shift migrations, as a lot are in the short term, that something gets lifted and something doesn't get shifted. So it's about using the platform to see: what are my interdependencies?
Have I got something in the cloud that's calling a piece of infrastructure back in the data center that I'm about to power off? It's that type of thing. I think that's key to any form of transformation, and again, many organizations are never going to have a 100% cloud environment, so you're going to maintain that level of communication and transaction from cloud to on-prem. I'll give you a customer example. We were talking with a new customer about observability; they were looking to do something with a new app they were launching, and we had to point out to the app team that while they had a monitoring tool within the app, the app basically did nothing if it didn't connect to a whole host of back-end systems that they couldn't see.
The success of their release depends on those systems: the application code at the front end might be perfect, but if the downstream environment isn't working or performing and you can't see that, your release is going to be a failure. That's my two cents on it, I think.
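The "something gets lifted, something doesn't get shifted" check described above amounts to filtering observed call edges against a decommission list. This is a minimal sketch; the hostnames and the `stranded_dependencies` helper are hypothetical, standing in for the dependency map an observability platform would provide.

```python
# Sketch: flag migrated cloud workloads that still call infrastructure
# slated for decommission, using observed service-to-service call edges.
# Hostnames and the edge format are illustrative assumptions.

def stranded_dependencies(call_edges, decommission_list):
    """Return (caller, target) pairs where a cloud service still depends
    on an on-prem host that is about to be powered off."""
    doomed = set(decommission_list)
    return [(src, dst) for src, dst in call_edges if dst in doomed]

edges = [
    ("orders-svc.cloud", "legacy-db.onprem"),
    ("orders-svc.cloud", "payments-svc.cloud"),
    ("reports-svc.cloud", "file-share.onprem"),
]
print(stranded_dependencies(edges, ["legacy-db.onprem", "file-share.onprem"]))
```

Running a check like this before powering anything off is the de-risking step Martin describes: any pair it returns is a migration that only half-happened.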
Thank you.
I've just finished, Ebow. We've talked to customers about data governance, we've talked about it today, and about how we can correlate. We're moving so fast that we're now into AI governance, and as I said at the beginning, observability underpins that pretty well. It's helping customers modernize their data as well as their systems, and, as Martin rightly said, it's all of the above, all at once.
Thank you. Martin, I know this one is definitely for you, although I'm not going to exclude you, Gary. It says, "For the sake of business owners, what sector other than finance does Dynatrace have the most synergy with?"
We work across many, many sectors, which I appreciate is a bit of a cop-out answer, so let me try to make that a bit more real for you. I would say any organization that has a high level of digital interaction with its customers is a very obvious prospect for what we do. Retail, and government agencies that have a high amount of interaction with end users, are a very big sweet spot for us in the U.K. and elsewhere. In manufacturing and automotive there's a huge amount of relevance too. Any organization that has a level of complexity in its IT environment and needs to provide good, robust, high-quality digital services should be considering having a chat with us.
Thank you for that. Okay, no more questions have come through, so that's it for today. Thank you to everyone for joining and staying to the end; we hope we've provided some extra insight, and I'm sure you'll agree it has been a superb session and our speakers have been great. As we said earlier, if you've got more questions, please feel free to get in contact with your Softcat account manager. If you don't have one, feel free to contact me; my email is ebowba@softcat.com. We'll be sending out a copy of the recording and slides, and you'll also be able to watch it back on demand once it's live on the website. There will also be a post-webinar survey that will pop up after the event.
So you can let us know how you found the content and would love to get your feedback. Thanks, everyone, for joining. And thanks to Martin and Gary for speaking on the webinar today. And we hope you have a great rest of your day. Thank you.