Good morning, everyone, and welcome. It's Blair Abernethy here, Software Analyst with Rosenblatt Securities. I'm thrilled to have Pega back with us again at our fifth annual AI conference. Joining us is Don Schuerman, CTO of Pega, and we've also got Ken Stillwell, who is CFO and COO and has been with the company for a number of years. Welcome, gentlemen.
Nice to see you.
Hey, Blair. Thanks for having us again.
Listen, this is a product AI-focused conversation, so I've got a number of questions for Don. I'm going to pull Ken in a couple of times to help us understand value propositions, cost structures, and the AI implications for the revenue model, if you will. We're not going to go through the quarter. We just did the quarter. You guys had a great quarter. The year's been ticking along quite nicely despite all the macro problems. With that, Don, why don't I just have you kick it off here and give us a bit of a high-level overview of Pegasystems right now? What are your core end markets, and what are the problems or challenges that you address for your customers?
We really focus on providing a platform for our customers that drives high levels of transformation in their business. That focuses on transforming some of their legacy. I think we're going to talk a little bit about technical debt and the impact there. Transforming the workflows that manage their operations and open up, I think, a lot of opportunities for increased efficiency. Transforming how they drive service for their customers, whether that's traditional contact centers, but increasingly through agentic and self-service kind of channels. Then transforming the way they engage and market to their customers to move from more traditional kind of spray and pray marketing solutions to being able to use AI to be highly personalized inside of every conversation they have.
Your platform is also, it's fairly generic. I mean, it can be used across a wide range of applications, right? Maybe just at a high level, you know, what are your core end markets?
Yeah, our platform is really an AI decisioning and workflow automation platform. As I like to describe it, it helps organizations make decisions and then get the work done. We really focus on the enterprise where they have to make these kind of pretty sophisticated decisions and manage work that often crosses multiple organizational silos, works across multiple systems, often in the backend. We tend to target places like financial services, banking, healthcare organizations, insurance. We do a lot of work with the federal and also state and local government, telecommunications. Those organizations and a lot of the workflows that we do tend to focus around the end customer. How do you onboard an end customer? How do you service an end customer? How do you resolve exceptions when things go wrong for an end customer?
You have some pretty large deployments, right? We're talking, you know, tens of thousands of transactions going through your system.
Yeah, it's not uncommon for us to be in systems that are processing tens of millions, what we would call cases or workflows, over the course of the year. When we're doing AI decisioning for some of our large customers that use this to figure out how they have the right conversation at every interaction, we're talking about billions or tens of billions of interactions happening in real time.
Yeah, yeah. Obviously collecting a lot of useful data that can be used or repurposed for other technologies, such as AI, right?
Yeah, we want to be really careful. We're not in the business of harvesting our customers' data. We are in the business of giving them a platform, right? What we want is our clients to be able to then take that data from how they've interacted with their customers or how they've managed their workflow and use that to drive the kind of continuous improvement loop so that they are continuously getting better and more targeted, for example, at the marketing conversations that they have or more efficient in how they predict the way that their workflows are going to end up so that they can actually drive better efficiency and effectiveness into the workflow engine.
Yeah, I think, Blair, the theme is to think about the two sides of this. One is, if you have billions of transactions, or even millions, the level of automation and the level of value that AI can provide to streamline and reduce human interaction to only when necessary is very critical. On the other side of that is the amount of information you can gather from the patterns that exist. The actual data is not that important, meaning the name, the transaction, et cetera. That doesn't really matter. It's the type of thing that happened. How often do people ask for address changes? When do they ask for them? Why do they ask for them? What's the typical lag? What other things happen that might be associated with events with clients?
How can you predict those and then reinforce and improve the customer service engagement that you have using AI, both in the analysis but also on the front end to drive a better customer experience? That's where it's so powerful, what AI has given us.
Yeah, that's interesting because it's not really necessarily, as you said, it's not a name, address, telephone number in a database. It's actually the processes that have occurred around that to result in this case happening.
Right. How often do people change their address? How often do people add other people to a credit card? It's important to understand the frequency of the different types of things that you see.
Yeah, interesting. Don, just back to your last user conference a month and a half, two months ago, you guys were talking more about IT technical debt and how the reliance on legacy systems makes it a challenge to adopt AI. Maybe tell us your perspective there.
We have some real data there. We went and surveyed over 500 executives at enterprise organizations. What we saw was that 88% of them felt that the technical debt they have prevents them from having the kind of agility and responsiveness they need in their systems, right? I think if we know anything in today's market, in this economy, being able to move fast and respond to changes is pretty darn important. 68% of those executives said point blank that legacy debt and legacy systems were preventing them from implementing and getting the full value out of AI.
As enterprises really think about how they're going to integrate the power of whether it's large language models or more traditional classical machine learning into their business so that they can drive more efficiency and deliver better customer experience, a prerequisite to that is getting their business logic and their data out of some of these legacy systems. We think there's a real huge urgency inside of our client base to do that. Now powered by some of the generative AI tools that we've brought to market like Pega Blueprint, we think we have a really unique opportunity to help accelerate our clients right at this moment of need.
Blair, one of the things that Pega's done for decades is this concept, used across the industry, of a wrap and renew: we would go in and let a legacy system sit as is, create a different UI, and manage some of the workflow, even across maybe a few legacy systems, to try to improve what was otherwise a terrible experience for our clients. What Don's touching on is that that isn't enough now, right? These systems cannot be leveraged in AI. They're many times unsupportable or in very dire need of support. They're sitting on legacy environments, whether that's custom development, COBOL systems, or mainframe systems. People are no longer patient.
Our clients, per the survey that Don just mentioned, can't tolerate this wrap and renew. They need to do real transformation. That's what Don's touching on.
They can't be spending money on just keeping the lights on on these systems anymore. They need to be able to free that budget to drive true transformation. That means, in many cases, completely rethinking how they run their workflows, how they engage with their clients. Moving off of that legacy system both allows them to move faster, but it also opens up the IT budget for them to be able to put it to the things that drive the real transformational value, which is where they need to be.
This year, one of the ways Pega is helping them transform is with Pega Blueprint, right? Maybe talk a little bit, Don, about where Blueprint is today in terms of its capabilities. It has had pretty good adoption over the last two years. Where does this thing go? How sophisticated does Blueprint become?
Blueprint has been the fastest-adopted product that we've brought to market, just in terms of the rate at which our clients have been able to come on board and use it. It's pretty amazing how far we've been able to push the capability for a product that's essentially about 18 months old. For example, we just introduced some features in Blueprint that allow a client to literally take a movie. Say you have a mainframe system. You can sit down and do a screen recording of somebody using that mainframe system, narrating and explaining what they're doing. Upload that into Blueprint, and Blueprint will extract from that recording the workflows that are in the system.
It'll look at the screens and figure out the data model by looking at just the fields that are on the screen. It'll figure out what the user outcomes are, what somebody's trying to drive, and then it'll use all of that information and combine it with industry best practices that we've developed, industry best practices that it can find on the internet. We've given our partners and some of the GSI partners the ability to inject some of their best practices into Blueprint as well. It'll use those best practices and what it understood of that mainframe system that you uploaded in the video to design and lay out a whole new application, new workflows, new data models where the interface points need to be. I can literally, in a couple of minutes, be clicking through what my new application experience will be.
A mockup of it, yeah.
Yeah, a fully running mockup with synthetic data, with dashboards. The running mockup even has an agentic chatbot built into it that you can pick up the phone and call and talk to in any language, right? The scale of what we've been able to do when you combine what Pega already had in terms of a really powerful architecture that worked across systems and front ends, as Ken talked about, a time-tested improvement structure for managing workflows and decisioning at scale, is significant. You use the ability of generative AI to synthesize information about existing legacy systems and best practices and inject it into that structure. What we're able to deliver and show to our clients in an initial meeting and an initial conversation, I personally find pretty mind-blowing.
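[Editor's note: as a rough, illustrative sketch of the multi-stage extraction Don describes, a recording goes in and a workflow, data model, and applied best practices come out. Every function name and the simple staging below are invented for illustration; they are not Blueprint's actual internals.]

```python
# Illustrative only: a toy version of recording-to-application extraction.
# A real system would run LLM-backed agents over the video and narration;
# here we stand in simple string handling for each stage.

def extract_workflow_steps(narration):
    # Stage 1: infer ordered workflow steps. Here, one step per
    # non-empty narration line.
    return [line.strip() for line in narration.splitlines() if line.strip()]

def infer_data_model(screen_fields):
    # Stage 2: guess a type for each field seen on screen (crude heuristic).
    return {name: ("date" if "date" in name else "string")
            for name in screen_fields}

def generate_blueprint(narration, screen_fields, best_practices):
    # Stage 3: combine extracted structure with injected best practices.
    return {
        "workflow": extract_workflow_steps(narration),
        "data_model": infer_data_model(screen_fields),
        "best_practices_applied": list(best_practices),
    }

narration = """Open the customer record
Verify the mailing address
Submit the address change"""
bp = generate_blueprint(narration,
                        ["customer_id", "effective_date"],
                        ["dual-approval"])
print(bp["workflow"])
```

In the real product each stage would be an LLM-driven agent rather than the string handling shown here; the point is only the pipeline shape: recording in, structured application definition out.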
Interesting, interesting. How have existing customers been adopting this? As you said, it's been 18 months; are you seeing any impact in terms of, say, a longstanding bank customer or insurance company customer? Are they starting to create more new workflows?
Absolutely. I mean, the story that I keep coming back to is we had one of our longstanding telecommunication customers, Vodafone, on stage at our user conference. Vodafone has adopted a corporate-wide mantra where they say, "No sprint without a print." A sprint is just a rush at doing software development. It's just a timeframe, two or three weeks of software development work that you're going to do. A print is Blueprint. Basically what they've said is they don't do anything without doing a Blueprint of it first.
Interesting.
That mindset has been driven because they've seen real results. They were able to take an application, an actual enterprise set of workflows that they needed to use, from concept to live in 40 hours, right? That kind of responsiveness, that ability to respond to change and not just move fast, but actually do something that is really good and impactful and meaningful for the business, that's what's driving other enterprises across our client base to really take this on and inject it into how they think about the new workflows that they're building and increasingly how they remove some of that legacy debt that is acting as a bit of an anchor on their ability to drive AI innovation.
You know, what's interesting, Blair, is what is both amazing and is a challenge to us at Pega at the same time is the way that Blueprint engages with you to actually design your application. It is so advanced and mind-blowing in terms of what it can do. It really is. The challenge is that it is so different than the way organizations are used to building with post-it notes and Visio diagrams, et cetera. There's a change management process that needs to exist in the industry around leveraging AI tools. I think we will get there, but naturally people are stubborn and they go back to patterns and they get used to doing things a certain way.
That's why we are putting Blueprint in the hands of our partners, the hyperscalers, our clients, anybody who wants to come to pega.com and see the experience. It's very analogous to how ChatGPT was made publicly available to encourage adoption and enable people to use new technology. I think that's a challenge for us, right? We want to help people rethink how they're supporting enterprise applications, even though in many ways they don't fully want to, because they go back to their old habits, right? That's this great opportunity for us and also the mission that we have at Pega.
I think the opportunity, as Ken kind of hinted at, is a big change, but it's much easier to drive a big change when people can literally put their hands on it and do it themselves, right? The fact that anybody can go to pega.com and try out Blueprint—in fact, this might be a weird thing to say during an investor call, but I would encourage anybody on this call who's really interested in what this thing does to go to pega.com/Blueprint and try it out because I think not only will it help you better understand some of the things that Pega is doing, I think it provides a really good vision. Forrester has been talking a lot about what they call AI app generation platforms.
When they go around and they talk about it, they actually include a screenshot of Blueprint as an example of what this future of AI-powered app development for the enterprise looks like. I think it's a really good way to experience where I believe the future of application development for enterprise software at our clients is going.
I want to ask you, Don, because we were talking about this just before we got on the call. There's a lot of concern: this is a very rapidly evolving technology. We all know that. It's a horizontal technology. We all know that. The question is, how does the Pega platform fit into the new agentic world? If this is where we're going to be in the next few years, does Pega just get obsolesced and sideswiped by somebody else building agentic applications, or do you become a core that's used even more than in the past? How do you fit in with the bigger AI world?
Our architecture, I think, has set us up pretty uniquely for this moment, right? Without necessarily anticipating large language models and the rapid rate at which they've developed, we've been working in the AI space for well over a decade now. We've seen and known what has been coming in terms of machine learning and the ability to take data and drive better predictions, whether it's into customer next-best-action and decisioning or into process optimization.
As we've built out the structure of Pega, we've really designed the architecture and the underlying structure that captures the elements of an enterprise application, the workflow steps you have to complete, the decisions you have to make, the places in which it needs to interface with data, much of which will not actually live inside of Pega, but will live in some other system, either a traditional system or increasingly a cloud-native data fabric like a Snowflake or something from AWS or Google. We've also built Pega from the ground up to assume that we're not always going to be the front end, right? Ken mentioned earlier that many of our clients, the front end into a Pega workflow is a Salesforce Lightning screen, or it's a customer self-service screen that's sitting on their website. What that's allowed us to do is a couple of things.
One, it's allowed us, because that structure is so complete and powerful, to build a tool like Blueprint that actually uses AI to inject business logic directly into that structure and get you to a running app that isn't just pretty, but is actually enterprise-grade and enterprise-ready in minutes, right? That's a unique advantage for us, and that's why you're not seeing other companies and other vendors with tools like Blueprint. The other thing it's set up is it has allowed us to plug into this agentic world, starting with using agents at design time, because Blueprint is an agent. Under the covers, when I mentioned that Blueprint is reading that movie and figuring out what that system is doing, it's actually running a bunch of agents to figure out what's inside that movie.
It's sending agents off to look out for best practices. Blueprint is an agent. The great thing is because those agents run at design time, some of the downsides of large language models, fears about hallucination, the fact that they don't give you the same answer consistently, when you actually apply it at design time, that kind of creativity and a little bit of unpredictability and out-of-the-box thinking is actually a good thing.
It's valuable, yeah.
It's valuable, right? We've harnessed it for a good thing. At runtime, Pega has the workflow structure where we can plug in either our agents or somebody else's agents to ensure that at runtime, when you need predictability, when you're a bank and saying, "Hey, we're investigating fraud. We have to follow these steps. We can't make it up as we go. We actually have to follow the steps." We've got the perfect architecture to help either our agents or somebody else's agents follow those steps in a predictable and repeatable way. That's going to be absolutely essential as enterprises try to deploy this stuff at scale.
You know, Blair, it's an interesting parallel point to what Don's talking about. If you think about the way a model works, a model can become more precise and more powerful if you actually give it proprietary content around the thing you're trying to solve, right? If you just went to a public model and asked it a question about something it didn't know, but then gave it the information you actually have at Pega or whatever company you work at, information around your process flow, your risks and how you're trying to manage them, the model will be that much more powerful. Parallel to that, or analogous to that: imagine if it had the workflow in its hands. Imagine if the agent at runtime actually knew how to execute the work, knew what the steps were, knew all the pitfalls and the things that could go wrong.
It's interesting because if I said to an investor, "Do you think it would be valuable to give the model relevant content to make it smarter?" I think everyone would say, "Of course. Why wouldn't you give it a workflow to actually tell it how to do the work?" I just think it's interesting on this concept of disruption or obsolescence or replacement or competition. It's really very similar: giving it more information so that at runtime, the agent can be that much more efficient, can get the work done exactly the way it needs to get done, and reduce the risk. Unpredictable results, unpredictable process steps, not being able to know how the work is going to get done: that is a very, very big issue.
The way to do that is to manage it using the workflow, the Pega workflow.
Yeah, it's interesting. I think the fact that you apply Blueprint at design time is key. As we've seen in other areas of the design software space, that unpredictability, that probabilistic behavior, actually adds value, because you end up with a solution that might be outside of what the initial designer was looking for, but then it sparks that higher value.
It's not innovation. It's actually another level of innovation, right?
It's another level, yeah.
Yeah.
Ken, can we talk a little bit about revenues and costs with respect to AI? How does Pega monetize AI, whether it's large language models or agentic technologies? What are the costs? How do you absorb them? What does it look like with your CFO hat on?
I'll hit the cost side first, and then I'll talk about the monetization model for us. What's really amazing is that the proliferation of models has created a significant amount of efficiency, even at the scale we're at now, which is probably nowhere near the scale we're going to be at in three years. There's a significant amount of efficiency in the cost to deliver and execute the AI models because there are a lot of them, and they all need to be competitive, and they all need to manage the cost of their models. I don't know if I'd be able to say that if there were only one model, right? I do think the economic competitive pressure of multiple models is a huge leverage point.
The second point is that the infrastructure build-out of the capacity to run all of these large language models is helping to keep up with the volume. From a cost standpoint, that's not a concern of ours at all. The models are actually very reasonable in terms of the cost to run them. Security is where a lot of our clients spend much more time, because they are focused not only on what the model uses, but on the steps and the processes it takes. Pega helps our clients manage that risk of how the model will execute. On the monetization side, Pega believes that the more our system performs automation, the more we should share in that cost savings or that revenue share.
We do that by calculating a certain amount of usage, so to speak. A case is a unit of measure. That's a very common way we connect to usage. As the models run, the models will drive more automation. The more automation turns into cases, Pega monetizes based on that increased volume of automation. The cost side, I believe there's lots of market forcing pressures to keep it reasonable. We get paid based on activity that the system is operating for our clients to automate and streamline activities and events.
To add to that, I think this was a place where, again, we were well set up for what AI was doing because we had moved away from user-based pricing years ago, right? We always felt that if we were driving more automation, the way to capture and monetize that was the amount of automation we were driving, not the number of users on the system. If we were doing our job, the number of users on the system should go down, right? For a long time, we've built around this amount of automation. That sets us up really well with our clients, because now as we drive more automation through it, we have the contractual models in place to support that.
I would just add one point. This happened before I joined Pega, but Pega was driving so much value with our clients that there were points in time when clients were coming back saying, "I don't need to renew for the same number of users because you've helped me reduce the number of people needed to actually execute this workflow." That drove us to the point Don alluded to in his comment: we actually said, "That's not a fair relationship." A fair relationship is, if we cut your cost by half, we shouldn't receive half of that; we should actually receive more, because the system is doing the work. That was something we really got onto 10, 15 years ago in a big way.
Now we look smart to have done that, but the reality is we were trying to solve a different problem, which is we were solving your problem of efficiency. We need to have a commercial model that makes sense for that. Now it just turns out that, you know, not having a user-based model is actually really advantageous in the world of AI.
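[Editor's note: as a purely illustrative piece of arithmetic for the case-based model Ken describes. The rate and case volumes below are invented and are not Pega's actual pricing; the point is only that revenue tracks automation volume rather than seat counts.]

```python
# Toy illustration: with a "case" as the unit of measure, the charge
# scales with the volume of automated work, not the number of users.

def case_based_charge(cases_run, rate_per_thousand_cases):
    # Charge = (cases run / 1000) * rate per thousand cases.
    return cases_run / 1000 * rate_per_thousand_cases

# Hypothetical numbers: automation lifts annual case volume 50% while
# the number of human users on the system falls.
before = case_based_charge(2_000_000, 50.0)  # 2M cases/year
after = case_based_charge(3_000_000, 50.0)   # 3M cases/year
print(before, after)
```

Under a per-seat model the same scenario would shrink the contract; under the case-based model the vendor participates in the automation it drives.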
Yeah, yeah, for sure. I want to ask, a question just popped in here from the audience. It's really around the third-party models that you're using. Which ones are you using? Are you building any of your own? What do you need to be able to deliver things like Blueprint?
I'll start real quick and then Don can give specifics. We are not building our own model. We want to be very open. We believe there's a lot more value in helping manage the work than in trying to create a commodity-type model or a specific model for our workflows or for our business. Don can talk specifically about all the different models we work with.
Yeah, and I'm going to put a little CTO specificity on what Ken just said, which is we are not building our own large language models.
Yeah.
We actually, for a long time, even prior to ChatGPT, were working with our clients to allow them to build their own machine learning models, their own NLP models, their own predictive models. Those models continue to be very, very useful because some of the things that large language models are actually not particularly good at are things that are very mathy, like predicting the likelihood of a client to respond to a particular offer. We're continuing to work with our clients to build those models that stay proprietary to them and their unique data sets and their unique client needs. On the large language model side, when this first showed up, we realized very quickly that there was going to be a sort of multi-model world.
We designed the architecture of what we call Pega GenAI to begin with to allow us to plug and swap different models in because we saw that as the models were developing, certain models are faster, certain models run a little bit slower but give better results. Certain models are better at ingesting documents than other models. Behind the scenes with Blueprint, we're using a combination of some OpenAI models, GPT-4. We've been starting to experiment with GPT-5. We've also been using Claude from Anthropic. We're running that on top of AWS Bedrock. AWS has been a huge partner for us in a lot of this journey. We're using a lot more of some of their capabilities. The important thing we found is the ability to swap models in and out as we add new use cases and capability to Blueprint.
As Blueprint has become truly agentic, it's actually arbitrating across a bunch of different models to find the right model to get the job done.
We're not going to bet on which model is going to win. We know there are going to be multiple models. We don't think there are going to be 50. Maybe there might be less than 10. We just don't want to be in the game of having to bet which one is going to be better for which use case. We've just been as open as we can in terms of leveraging those models. For example, in our Pega Cloud for Government, which we run on AWS, we use Bedrock for that, for example, in the model there. It really just depends on the situation. Clients can also bring their own too, right? I mean, they can actually decide that they want to use a specific model. To the extent that we can support them on that, we will.
I think to Don's point, we just don't want to be betting on the large language model. I think that's where the question was pointing when they said model; I'm assuming they meant the large language models. Thank you for the clarification, Don, on my answer.
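[Editor's note: a minimal sketch of the plug-and-swap idea Don describes, assuming a simple registry that routes each use case to whichever model claims that strength. The registry design, model names, and handlers are invented for illustration; a real deployment would call hosted models, e.g. through a service like Bedrock.]

```python
# Illustrative model registry: models can be added or replaced without
# touching the code that requests work, and each task type is routed to
# a model suited for it (fast chat vs. document ingestion, etc.).

MODEL_REGISTRY = {}

def register_model(name, strengths, handler):
    # Register a model with the task types it is good at.
    MODEL_REGISTRY[name] = {"strengths": set(strengths), "handler": handler}

def route(task_type, prompt):
    # Pick the first registered model that lists this task type
    # as a strength, and let it handle the prompt.
    for name, model in MODEL_REGISTRY.items():
        if task_type in model["strengths"]:
            return model["handler"](prompt)
    raise LookupError(f"no model registered for task type: {task_type}")

# Stand-in handlers; real ones would invoke hosted LLMs.
register_model("fast-model", {"chat"}, lambda p: f"[fast] {p}")
register_model("doc-model", {"document-ingestion"}, lambda p: f"[doc] {p}")

print(route("document-ingestion", "summarize this contract"))
```

Swapping in a new model is then just another `register_model` call, which is the property Don highlights: the calling code never bets on which model wins.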
Don, I want to revisit something we spoke about a little bit earlier, just to understand. We're all seeing a lot of scary things for the software industry with agentic AI kind of moving in and taking over. Besides Blueprint, if there are other players out there that have built agents, how do they interact with your core installations of Pega Cloud or Pega On-Prem? What's the opportunity and the threat for Pega?
We announced at PegaWorld, our user conference, a capability we call Agentic Process Fabric. What that is really about is using, again, Ken talked about the fact that our knowledge is we know the workflows. We know the steps that have to get done in order to deliver meaningful business outcomes in what are often highly regulated business situations for our clients. We want those workflows to both be accessible to a wide variety of agents, right? We had already had an API that we call the DX or the Digital Experience API. That's what allowed, for example, us to have a Salesforce Lightning screen running Pega workflow seamlessly. We quickly extended that to become what we call the Agent Experience API, the AgentX API, so that any agent, whether it's our agent or somebody else's, can call into Pega to initiate a workflow.
The workflow can then dynamically, in real time, tell the agent what it needs to do next. The workflow can literally give instructions to the agent in real time. We're continuing now to advance that to use things like MCP and A2A, which are emerging sort of standards at both the agent and tool interoperability space. The other thing we realized was going to be really important is our workflows are really good at assigning work to people. They're really good at automating work that maybe used to be assigned to people. Now they can assign work to an agent. The power of that is a lot of the structures that you use when you're assigning work to people are still really useful when you assign work to an agent.
You want to make sure you assign it to the right person or the right agent that has the right set of skills. You want to be able to run quality checks to make sure the agent actually did the work the right way. You want to be able to have a feedback loop. If the agent gets something wrong, you can push it back into it. You want to be able to have an escalation point. If the agent can't figure something out, it has a way of pushing it forward to the next step. We even have all of that already in place in the system, right? Now we've just turned it so that it can actually orchestrate agents as well. To answer your question, we can have other agents, Pega and otherwise, calling into Pega workflows.
We can also have the Pega workflow calling out to either Pega agents or third-party agents to do individual tasks within the workflow. We are still maintaining that workflow governance to ensure that the necessary steps, the mandatory steps, the best practices that companies have spent, oftentimes, decades developing, and in many ways represent their competitive advantage when compared to competitors, get followed in a predictable, consistent, and repeatable way.
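[Editor's note: the pattern Don describes, an agent asking the workflow for its next step while the workflow enforces ordering, runs quality checks, and pushes failed work back, can be sketched roughly as follows. The class and method names are hypothetical; this is not the actual AgentX API.]

```python
# Illustrative workflow governance over agents: the workflow, not the
# agent, owns the mandatory step order, quality checks, and rework loop.

class Workflow:
    def __init__(self, steps):
        self.steps = list(steps)   # ordered, mandatory steps
        self.completed = []

    def next_step(self):
        # Tell the agent what it needs to do next, or None when done.
        i = len(self.completed)
        return self.steps[i] if i < len(self.steps) else None

    def submit(self, step, result, quality_check):
        # Reject out-of-order work: steps can't be made up as we go.
        if step != self.next_step():
            raise ValueError("steps must be followed in order")
        # Feedback loop: failed quality checks push work back to the agent.
        if not quality_check(result):
            return "rework"
        self.completed.append(step)
        return "accepted"

wf = Workflow(["verify identity", "check sanctions list", "file report"])
while (step := wf.next_step()) is not None:
    result = f"done: {step}"   # stand-in for the agent actually doing the task
    wf.submit(step, result, lambda r: r.startswith("done"))
print(wf.completed)
```

The same loop works whether `result` comes from a person, a Pega agent, or a third-party agent; the governance (ordering, checks, rework) stays with the workflow, which is the point Don makes about predictability in regulated processes.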
It is not a threat of replacement necessarily, but a new way to leverage your platform.
We've seen what's happening with AI and agentic AI as a really exciting accelerant. Take Blueprint: we're now able to build, design, and deliver workflows, in some cases 50% faster or more than we ever could before. We're able to accelerate a lot of our selling process because I can literally, in the first meeting, show a client what is essentially a bespoke, personalized demo of what Pega would look like in their environment. I can show it to them in minutes without any engineering effort.
The fact that we're able to plug into both our agents and other agents to orchestrate work across the business, we just think it creates a huge new set of opportunities for us, especially the unlocking of the legacy transformation, which I think is going to be a huge area of investment for enterprises as they look to modernize to keep up with all of the changes that are happening.
I think that what maybe freaks people out a little bit, Blair, is that there are real areas that AI is going to disrupt. For example, if I'm using a tool to produce a dashboard that's simply organizing data, then I can go to AI and say, "Tell me what the insights are." That's a real disruptive event. It's going to be very hard to argue why AI couldn't take the data and produce the exact same view that two financial analysts on our team could. What happens is you see that use case and then you want to extrapolate it to everything.
The reality is there is just work that needs to follow a specific process, that needs to execute a certain way, whether that's for internal controls, for a compliance requirement like the E.U. AI Act, or for regulatory standards like the Payment Card Industry standard for credit card transactions. There is a lot of work protecting consumers that requires certain information to follow a consistent path. You can say to a consumer, "This is how your loan was originated. Here's how the decision was made. Here's how your transaction was processed." Situations like that are very different from just producing a dashboard with a text field telling you the answer to the analysis.
I think what's happening is that investors, and in some cases the industry, get confused on the differences between these use cases. We don't do any of the simple use case of, "Let's just throw some data in rows and columns." Most of what we do, I would venture to say materially all of what we do, is done with Pega not because Pega is a simple tool for opening and closing tickets, but because of the power of the platform. That's exactly the reason why generative AI is complementary, and not competitive, to that differentiation.
Don, I want to ask you about a couple of other areas in the business. It's a fairly small part of the business, but I want to understand the impact from AI on things like traditional robotic process automation or screen scraping, if you will. Is there any change? Does that go away eventually, do you think, or what happens there?
We've never thought hugely about RPA as a standalone business, right? When we acquired OpenSpan, which I think was like eight, nine years ago now, the initial driver was that we thought it was the complement to the workflow orchestration we were already doing. The workflow orchestration, the getting the work done to the outcome, was the real value. RPA gave us the ability to plug in and get data faster, or to reach systems that didn't have nice APIs. We've continued to use it in that way. I think over time, more and more of that will begin to erode, for two reasons. One, in some cases, AI might be able to drive some of it.
In other cases, if we're successful in driving some of this legacy transformation, we'll be moving our clients off of these old systems where they don't have APIs and onto modern new cloud architecture where the data is API accessible. If you have good APIs, you don't need any of this RPA stuff to begin with, right? I think as a stopgap, as I look out over the next three to five years, we're going to continue to use the RPA technology as a way of getting at some of those systems and getting at some of that data that we need. Ultimately, our goal is not to sell a bunch of RPA. Our goal is to drive workflow orchestration and decision management at scale for our clients. We've always thought that RPA was just an accelerant and a useful tool in helping us do that.
Yeah, and Blair, you know from conversations you've had with me, I was criticized heavily over the last seven or eight years because we didn't go deeper into screen scraping and desktop automation. I have said that we did not believe in it; we thought it was a band-aid. It's like duct-taping your windows shut at home. It's not actually fixing the issue. What's proving it to be a band-aid is if you look at all of the RPA companies, what are they all doing now? They're trying to redo what they did using agents. They're trying to have agents do the RPA. To Don's point, we never thought it was something that was going to be a long-term trend. We thought about it as a break-fix, short-term kind of band-aid. I think it was.
It helped clients advance in places where they couldn't redo the application at the speed that was needed. Now what you're seeing is that even the vendors themselves are recognizing they've got to make it agentic, right? With RPA, there's just too much breaking. The robots break, they get confused, there's too much manual interaction. What Don was talking about is that we value the robotics we have inside the Pega platform for interoperability. AI, robotics, and the platform are all very complementary; it's a question of what you use when.
Yeah, yeah. Okay, that's great. That makes sense. You know, Don, I wanted to talk to you a little bit more about some of the other innovations you've put out in the field in the last year or so, besides Blueprint, which we've talked about. Of the other AI-driven enhancements you've made to the platform, which ones would you call out to say, "Hey, these are the ones that are really resonating with customers"?
I think the Agentic Process Fabric that we talked about, the ability to think about how they stitch these agents together and really, again, orchestrating agents against what's the outcome we're trying to drive, right? I think the interesting thing, and McKinsey just did this study where they were talking about how, I think they said something about like 8 in 10 CIOs have said that they've started implementing AI, and roughly the same number are still trying to figure out where the value is, right? I think the value comes by looking at the outcomes you're trying to drive. How do I drive better efficiency for the business? How do I help my customers get their service requests driven faster? Agentic Process Fabric gives you the ability to orchestrate your agents against the outcome you want to get done.
I think clients really appreciate that as a pragmatic way to use this stuff. There's also a lot we've been doing, in addition to Blueprint, to accelerate bringing apps live. A big challenge enterprises have is testing. If I'm going to take an app live, I've got to be able to test it. Every time I want to upgrade it or change it, I want to be able to automate that regression testing so I can make my change quickly. That used to require a whole bunch of developers writing a bunch of test cases, which is, one, slow, and two, work developers hate doing.
We've now embedded tools with AI that will actually generate all the test cases for you so that you get an app that you can deploy faster and you free your developers to work on the things that are really meaningful and the stuff that they actually like doing.
Yeah, it's interesting. We're coming up on our time here. Maybe one more for you, Ken, if I can. With your crystal ball, and I'm not asking for guidance, it looks like the AI that you're doing is going to enhance the value of your platform. One, does it accelerate your revenue in the next five years? And two, does it accelerate higher-margin revenue or lower-margin revenue, given that what has to be done to deliver these systems is way more sophisticated?
If I had to predict, I would predict that, and I think these go hand in hand, that the ability to get started on a legacy transformation project is going to be faster. I would also predict that the getting it done is going to be faster, which I think then is going to drive faster systems being legacy transformed. If those things are all true, we will have more value put into the intellectual property versus the professional services that are needed to implement it. We'll have less operating costs, more people on the cloud, more automation and value, and more transactions going through our systems. I think that would all lead in the direction of us having a great opportunity in front of us.
Okay, great, great, great way to summarize and bring this to a close. Don, Ken, really appreciate it. Great to see you guys. We'll let you drive on.
Thanks, Blair.
Thanks.