All right, gentlemen, let's cut to it. Our technology team has been slipping. The competition is fast, flashy. I need a big star, that one big-name solution, the kind of player that fills the stadium. Find me that, and we are back on top.
I don't know, single star, that's not really how championships are won. You don't build a legacy on one headlining player. You build it on balance, on a team where every position is covered and every role matters.
Exactly. Think about it. What if your star crashes or can't adapt to the game when the game changes? You need flexibility, you need depth. That's what SAP BTP gives us: a whole roster of players, each one bringing a critical skill set to the field.
A roster? Sounds expensive. Sounds complicated. I want something simple. One tool, one answer.
It does not have to be complicated. It is all connected. With SAP BTP, you get safe extensions for your core SAP solutions: clean, upgrade-stable, and cloud-ready. No more breaking the foundation just to add a new play.
Oh no, what do you mean?
Then there is integration. We're talking smooth handoffs between SAP and non-SAP systems. No hospital passes. Everything works as a unit.
Yeah, plus automation and process improvements. Imagine players that not only run the plays that you call, but also anticipate what's next. AI, this year, it's not just hype.
With Joule, with SAP AI Core, with agents, you're getting players who see the entire field, analyze it in real time, and call the winning shots.
You're saying, I don't need a single superstar. I need a team that covers every angle.
Exactly.
Think of it this way: data is your playbook, AI is your game intelligence, extensions are your specialized positions, and integration is the teamwork that brings it all together. Together, they're unstoppable.
That is why we're not here to sign just one. We're here to scout globally: Berlin, Sydney, Bangalore, Louisville. That is where the next generation of BTP talent is meeting in November. Developers who can extend, integrate, automate, and innovate on SAP like nobody else.
All right. You've convinced me that I'm willing to let you take this trip. Find me this team of champions.
Kazimir and I'll be back.
Don't worry. We're building the kind of lineup that doesn't just play the game, it changes it.
When we're done, you'll see SAP BTP isn't about one star, it's about winning the season, every season.
All right, let's get out of here. All right, where do you think we should start? How about Cloud Application Programming Model? Vitali, what can we see there?
All right, who can present it?
It's the first step.
Second step. I could give Tom a simple answer, but you know me. This is going to be "Hold my coffee, I'll give you something to impress." I am using an SAP HANA Cloud instance, and in this instance, I activated the triple store, which allows you to use the Knowledge Graph Engine in SAP HANA. Obviously, I know who is going to present all the demos today, so I created an ontology, and this ontology shows that there are SAP Developer Advocates presenting different demos using different capabilities of different products. All of that is already available as triples in the Knowledge Graph loaded into HANA, and I will show you this in a minute, no worries. At least you saw some SPARQL code, right? All of you love SPARQL code. Who does not love SPARQL code?
Okay, no one raised their hand. To make it a little bit easier, we've built this Python package for HANA and LangChain integration that allows you to run Q&A over the graph. I'm connecting to this very instance that I just showed you, and then I'm instantiating the Knowledge Graph, the one I just wanted to show you some data from. You can see at least the schema for that Knowledge Graph, and then I'm using one of the many LLM models available in SAP AI Core's GenAI Hub. Read this part of the code when you watch the recording, okay? I'm asking this question in just natural language: what demo will show Cloud Application Programming Model, and who can show this? As you can see, the LLM is getting this question and generating the SPARQL query.
I know all of you would love to write this yourself, but let the LLM do it, and then it returns the answers. The demo titled "SAP CAP, MCP, and AI Agents" will show Cloud Application Programming Model, and the two people who will show it are Nico and DJ. Because this is RDF, the Resource Description Framework, I can link this to resources on the web. Tom, that's your answer.
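The triples-and-SPARQL idea behind this demo can be sketched in a few lines of plain Python. This is only an illustration of graph-pattern matching over triples; the actual demo uses SAP HANA Cloud's triple store, the Knowledge Graph Engine, and a LangChain integration, and the identifiers below are made up.

```python
# A tiny in-memory triple store illustrating the idea; the real demo uses
# SAP HANA Cloud's triple store. All names below are invented for illustration.
triples = [
    ("demo:cap-mcp-agents", "rdf:type", "ex:Demo"),
    ("demo:cap-mcp-agents", "ex:title", "SAP CAP, MCP, and AI Agents"),
    ("demo:cap-mcp-agents", "ex:shows", "Cloud Application Programming Model"),
    ("demo:cap-mcp-agents", "ex:presentedBy", "Nico"),
    ("demo:cap-mcp-agents", "ex:presentedBy", "DJ"),
]

def match(s=None, p=None, o=None):
    """Return triples matching the pattern (None = wildcard),
    the same basic graph-pattern matching a SPARQL query performs."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "What demo will show Cloud Application Programming Model, and who can show it?"
demos = [s for s, _, _ in match(p="ex:shows", o="Cloud Application Programming Model")]
presenters = [o for _, _, o in match(s=demos[0], p="ex:presentedBy")]
print(presenters)  # ['Nico', 'DJ']
```

In the demo, the LLM's job is exactly to translate the natural-language question into that kind of graph pattern.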
All right, so now that we know who our first tryout will be, DJ and Nico.
All right, DJ.
Hi.
Yesterday in the keynote, I think we saw that finally SAP development is coming together all in one place.
What, to that mythical one IDE to rule them all?
Yeah, pretty much. We've now got SAP Build, ABAP Cloud, you heard that, and CAP all coming together in Visual Studio Code.
ABAP. I mean, that is sacred territory, right? I can jump, though, from SE38 straight to VS Code. Any more fellow dinosaurs here? Get that reference, SE38, anybody? Put your hands up. Yes, over there. Fantastic.
We now have the entire stack, from UI5, CAP, Fiori, ABAP, and HANA, all in one, by the way, open source-based IDE.
Amazing, amazing. And because SAP Business Application Studio is based on the same open source framework, VS Code OSS, you get the same experience there.
Exactly. It is not about just typing the code any longer, it is about intelligent development.
Artificially intelligent?
Right. If we combine SAP development with the open ecosystem of AI tooling out there, things get really interesting. I would like to show you Cline, which is one of the available AI assistants in VS Code.
I do like the term assistant. It reminds me of a buddy working with you rather than this crazy type-ahead editing that just gets in my way.
Right. It's more like a co-developer that can design, install, test, and most importantly, fix things for you.
Sounds amazing, but where's the SAP connection?
I knew you would ask, so this instance of Cline is powered by SAP AI Core, which is SAP's secure AI engine. You can see the connection details here, and that means all your data, prompts, and AI credits stay within the SAP BTP trust boundary. It is compliant, governed, and enterprise grade.
Sounds like AI with corporate guardrails, more or less, right? Plus, on the other end, lending domain-specific knowledge, we've got these MCP servers. As we all, I hope, learned at DevToberfest: Model Context Protocol, right?
Exactly. These MCP servers have access to data and functions in an SAP context. We have the UI5 and CAP MCP server, for example, installed here.
There's also a Fiori one, but let's have a look maybe at the CAP one first.
Yep. I'll just fire a prompt.
Cool.
Generate an SAP CAP app with a product service.
No typos.
That was the most exciting part. We launched this in plan mode, so we will see what it is about to do before it executes the steps.
I see straight away that one of the tools within the MCP server for CAP is Search Docs, and it's using that first of all, right?
Exactly. It's going to the documentation to read CAP before it actually does anything, which I really appreciate.
Capire. Capire, did you get that? Excellent.
Right. We can see that it is searching the docs, it's doing a bunch of things.
Yep.
I'll now switch to act mode, and it should start doing something to the file system.
We can also see that it's telling us what it's going to do in detail, so we can actually review that if we want to before we go ahead.
Exactly. Since we're short on time, I'll just jump into the future right here.
We're in the future right now, come on.
This is the final product, so this is what might have been generated by Cline with the CAP MCP server.
Maybe here we can add some UI5?
Sure, we can. Let's just say, generate a UI5 app for the product service.
Nice, nice. This is going to be a freestyle UI5 app, hopefully with a nice view.
Hopefully.
Let's see what it does.
It should go to the CAP MCP first to search the model, so it makes sense to get a good understanding of the data model before you build a UI on top.
When we say good understanding, we're not talking some airy-fairy good understanding. This is basically grabbing a CSN representation of the actual data model, right?
It does, exactly. Yeah, here in a subsequent step, it goes to the UI5 MCP to get information on how to create a UI5 application. It's actually getting data from that MCP on how to execute these steps, and then it goes on and does it.
What we've got in the background are some rules, some general rules for Klein, but we've also got, I think within the UI5 app's MCP server configuration, we've got some rules to tell the agent to use specific tools to start with, right?
Right. It's all about guiding the agent so it does what we want it to do.
Cool. Okay, where are we right now?
All right. Again, I jumped into the future once more, and this is what a view looks like.
I'm not sure about the color coding, but I do like the XML. I'm an XML view person. I love that. It's quite nice. Four spaces for indentation, I'm not so sure, but in terms of what it's generated, I'm happy.
Yeah, and I'm happy too, because we now have really the whole package. We got SAP Development AI Assistant end-to-end, both in VS Code and the Business Application Studio. It's all powered by SAP AI Core and probably most importantly, guided by these SAP delivered MCP servers.
Elevating us developers. Elevating, literally, we're standing on a box here. Elevating us developers to be smarter, faster, and more creative.
Right.
That's it. Back over to you two.
Great use of AI.
Yeah.
A free agent that we could sign right now. For our next pro day workout, let's see Ajay.
Okay, so now I'm going to show you how to event-enable our SAP Cloud application. To do that, I'm going to use a new player onboarding scenario. We need to onboard all our new players into our favorite HCM system, which is SAP SuccessFactors. Now, it would be really cool that as soon as I onboard a new member, I could automatically trigger some processes which could inform my other cloud applications that a new employee has been hired. Let me show you how we can easily do that. I'm in my SAP BTP global account system landscape, and I already have an SAP SuccessFactors tenant. I have SAP Build Process Automation. I have an SAP BTP application. In this case, it is a custom CAP application which can receive SAP SuccessFactors events.
I also have SAP Cloud Application Event Hub, which is an event broker within BTP that we can leverage to seamlessly configure the flow of events. To do so, I'll start with creating a formation of type Eventing between SAP Cloud systems. All I need to do is add a new system which I want to receive events in. Let's go ahead and do that. Now, while it's processing, what's happening under the hood is that all the systems in my formation are automatically establishing connection with SAP Cloud Application Event Hub so that they're ready to exchange the events. If I need to receive an event from a different system, then I need to define an integration dependency. I'm in my CAP service instance, so let's go ahead and define a new one.
Let's call it SAP SuccessFactors new hire. My source system namespace is SAP.SF. The events that I'm interested in receiving are the following. With this, I have created my integration dependency, and all I need to do next is to enable the subscription. To do that, let me show you how. We need to go to SAP Cloud Application Event Hub UI, and there I can see all my subscriptions. Just by enabling the toggle button, my subscription is enabled. Just like that, my broker knows that if it receives any of the events from SuccessFactors which we just described, it will immediately send them to our SAP BTP CAP application. Now let's go to SAP SuccessFactors, and then we try to add a new hire to onboarding. Let's provide some details.
First name, Oscar, and the last name as Benji. Let's provide an email address, so oscar.benji@sap.com. We need to give a hiring date. It should be as soon as possible, so today's date. I want to hire this player from the US region. I already know that the hiring manager is Blue Beard. Yeah. I give the reason as new hire. Let me initiate onboarding. In a few seconds, all my systems where I have enabled subscriptions will be receiving these events. If you remember, we enabled a CAP application to receive this event, so let's check whether we already received this event. Yes, we have successfully received the event from SAP SuccessFactors in our CAP application. Let's also go to SAP Build Process Automation, which was also one of the subscriptions, and let's check.
Yes, we have received the event here as well. Just like that, we have easily configured the exchange of events between the SAP Cloud applications. In this case, SAP SuccessFactors, SAP Cloud Application Event Hub, and a custom BTP application. Do remember that we can easily forward these events also to other SAP BTP services like SAP Integration Suite and Advanced Event Mesh, thereby making it available in our larger event-driven architecture and landscape. Over to you, Tom.
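The broker pattern behind Event Hub can be sketched in a few lines: subscribers register per event type, stay inactive until their subscription is toggled on, and the broker fans each published event out to every enabled subscriber. This is just a conceptual sketch; the event type name and payload below are invented for illustration.

```python
# Minimal sketch of the broker pattern an event hub implements: systems
# register subscriptions per event type, and the broker fans events out
# to every subscriber whose toggle is enabled.
from collections import defaultdict

class EventBroker:
    def __init__(self):
        self.subscriptions = defaultdict(list)   # event type -> (name, handler)
        self.enabled = {}                        # (event type, name) -> bool

    def subscribe(self, event_type, name, handler):
        self.subscriptions[event_type].append((name, handler))
        self.enabled[(event_type, name)] = False  # off until toggled, like the UI

    def enable(self, event_type, name):
        self.enabled[(event_type, name)] = True

    def publish(self, event_type, payload):
        # Deliver only to subscribers whose subscription is enabled.
        for name, handler in self.subscriptions[event_type]:
            if self.enabled[(event_type, name)]:
                handler(payload)

received = []
broker = EventBroker()
broker.subscribe("sap.sf.onboarding.hired", "cap-app", received.append)
broker.enable("sap.sf.onboarding.hired", "cap-app")   # the toggle button step
broker.publish("sap.sf.onboarding.hired", {"firstName": "Oscar", "lastName": "Benji"})
print(received)  # [{'firstName': 'Oscar', 'lastName': 'Benji'}]
```

Adding another subscriber, such as a process automation handler, is just another `subscribe` plus `enable`, which mirrors how the formation fans the SuccessFactors event out to both targets in the demo.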
Yeah, very nice. Yeah, that Eventing stuff, it's so powerful and easy to use.
I know, right?
Next up on my list is Nora.
Nora.
Thank you. In the keynote yesterday, we heard how SAP is opening up their developer ecosystem and how we are already compatible with a lot of different AI agent frameworks out there. This flexibility is really one of the things I love most about our AI strategy. Now, I will show you how you can build an AI agent using LangGraph and GenAI Hub. My colleague Riley came up to me the other day and he told me about all the things that he has to do to quality test a tutorial on developers.sap.com. He asked me if we could not automate that using an AI agent, right? I was like, yeah, of course we can. Love to do that. Yeah, what I did was I immediately started thinking about which tools my AI agent would need to actually be able to test a tutorial.
First of all, it definitely needs browser access, right? Because I want it to be able to open BTP and other services. I am going to use the Playwright MCP server for the browser automation. I also need it to have access to a terminal so that it can create files, write code, and actually run code. I wanted to have access to the grounding service of AI Foundation in case it needs to look up any SAP-specific information. Okay, this is the tutorial that we will be testing here today. Actually, we will not be testing it; the agent is hopefully going to test it, right? It is a tutorial about how to create an orchestration workflow in AI Launchpad. It is supposed to look something like this in the end, right? This is my code.
Here you can see that I am using the SAP Cloud SDK for AI, the Python version, to initialize my models, so the GenAI Hub SDK. I'm using one of the more than 30 models that we have available. Here I'm using Claude 4 Sonnet. Here you can see that I'm using the prompt registry of GenAI Hub so that I can store my agent prompt there, because these prompts can get rather large, and I want to keep them separate from my code base. Okay, what else? Here we have the MCP server, the Playwright MCP server I use to automate browser access, and then we can already create our agent.
Here you can see that I create the agent, and I'm adding the LLM as the brain, right, and all the tools that it needs to actually test a tutorial for me. Okay, now we can actually already run our agent. I'm going to stream the response so that we can see what it's thinking. Okay, this is the live stream while I'm running this, so I hope it does what I want it to do, but it's an AI agent, you never know. Okay, I'm not doing anything anymore, right? This is all the agent now. What it is supposed to do now, yeah, exactly. Okay, it did it, that's good. What happens now is that the agent sends all the information it has. The agent prompts, all the tools it has available, and obviously the tutorial instructions in Markdown to the LLM.
The LLM then reads this entire tutorial, creates like a step-by-step of what it wants to do and how it can get through this tutorial. Very important, it decides on which tool to use first. In this case, it used the Navigate-to tool to open AI Launchpad, which it did. The tool returns the current state. For example, a screenshot of the webpage, or it extracts the HTML code and sends that back to the agent. Now the agent takes all that information and also, again, all the history of what the agent did until now, takes that, sends it back to the LLM. The LLM checks the state it's at, checks what it should do next in the tutorial, decides on which tool to use next, for example, a click tool or filling in the variables here, sends that back to the agent.
The agent actually calls the tool, and you get the point, right? It does that over and over and over again until it actually tested this entire tutorial for us. What it should do in the end is obviously give us feedback on this tutorial so that my colleague Riley does not have to test all these tutorials from SAP anymore, right? You might have noticed something here. It is now creating all these little modules and does everything as described in the tutorial, but it skipped the first module, the grounding one. We will see it in a bit when it goes to the next module, because that was actually not part of the tutorial, right? This is a rather new feature that was released after the tutorial was created.
The agent got a little bit confused and did not really know what to do with it, so it just skipped it. If I were the tutorial tester here, I would give that as feedback, right? That you should say what the tutorial user is doing with this module. I hope it actually gives that as feedback. I obviously do not know because, you never know. Once this agent is done configuring all these steps, it should just close the browser window and give us feedback. How much time do I have? A minute. Okay, let's see. It configures the last things, the translation part. Okay, and then it will just close it any second now and give me feedback. Hold on. Tutorial structure clarity, it always says that.
Okay, here it says, it's very dramatic about it, but it does have a point. The grounding module confusion. It was a little bit confused by the grounding module, and it says the tutorial should mention whether to disable it or whatever to do with it. Anyway, it's really good feedback. That's exactly what I would have given as well. Riley, I think good news for you. You can use this to at least prescan the tutorials from now, and you don't have to do it yourself. Hi, Riley, by the way.
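The loop Nora walks through, where the LLM picks a tool, the agent executes it, and the observation goes back into the history, can be sketched like this. The fake LLM and the two tools are stand-ins for the real GenAI Hub model and the Playwright MCP tools; everything here is invented for illustration.

```python
# Sketch of the agent loop: the LLM decides the next tool from the history,
# the agent executes it, and the observation is fed back, until "done".

def fake_llm(history):
    """Stand-in for the real LLM: picks the next action from the history."""
    if not history:
        return ("navigate_to", "https://ai-launchpad.example")
    if len(history) == 1:
        return ("click", "Create Workflow")
    return ("done", "Feedback: the grounding module needs an explanation.")

# Stand-ins for browser-automation tools; each returns the new page state.
tools = {
    "navigate_to": lambda arg: f"opened {arg}",
    "click": lambda arg: f"clicked '{arg}'",
}

history = []  # everything the agent has done so far
while True:
    action, arg = fake_llm(history)
    if action == "done":
        feedback = arg       # the final tutorial feedback
        break
    observation = tools[action](arg)
    history.append((action, observation))  # sent back to the LLM next turn

print(feedback)
```

The real agent does exactly this, except each turn also carries the tutorial Markdown, the tool schemas, and a screenshot or HTML snapshot of the page.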
That's really cool. Now that I think about it, you know, AI agents taking over your machine is kind of like when you have a trusted quarterback and you let them call an audible on the field.
Exactly. Now for more AI-powered scouting with Daniel and Rekha.
We certainly know about SAP Joule, SAP's generative AI copilot. Let's see how to extend its capabilities with custom skills and Joule AI agents using SAP Build Joule Studio. Over to you, Daniel.
Thank you, Rekha. To create a Joule extension, we go into the Build lobby like we would create an app or a process, and now we have the option to create a new Joule Studio project. I'll take an existing project and an existing skill, which is designed to bring back new hire processes. In the General tab, we give it a name and a description, and now the description is vital because it determines when Joule will trigger this skill. There are also several settings, including one where it determines whether I return the message to the user or I let Joule create it itself using data that I return to it. I can also define parameters that are required by the skill, and Joule will prompt for them if the user doesn't provide them. I also have output parameters.
This is the data I provide back to Joule when I want Joule to create the messages. Now that I've configured the skill, all that's left to do is to figure out what the skill is going to do. I can call an action to bring back data from the backend. I can send a message to the user. I can call another skill, or I can call a process or automation. Calling a process automation is quite powerful because it means that the entire extensive capability of process automation is already built into Joule Studio. This process—I'm sorry, this skill—calls some process automation APIs and then provides a message back to the user, binding the data from the action. Now, remember Ajay created a new hire, and he sent an event from SuccessFactors to BTP to process automation?
We created a process that would be triggered by that event. In fact, here you can see this was triggered at 9:16 just a few minutes ago, and you can see in the context we have the new hire that he had, Oscar Benji. We also created a Joule skill that would return the process or information about the last process that was run. You can see I already did a prompt for the last process, and you can see Oscar Benji, and you can see the manager information. Once I have that, I can now ask for the approval tasks for that process. We'll give it a second. We will get back the approval tasks for that process, and it will give us the due date and when it started and the task owner.
We can also now ask Joule to go all the way back to SuccessFactors, get us information about the task owner, and we can see, oh, our lovable Blue Beard; he's the manager. We will ask Joule to approve the task. Okay, we will wait for it to approve. It says it’s approved. I can go back to the monitoring tab. I can refresh. Just give it a second. We can see that the task was approved, the process completed, all within Joule using a low-code skill we created with Joule Studio. That is pretty good. What is even more impressive is what you can do with agents. Rekha, can you show us?
Yeah, sure, Daniel. I'm going to roll up my sleeves a little bit, and let's see how to create a custom AI agent within a Joule Studio project. For that, click on Create and select Joule Agent. I'll provide the name and description. It is an agent to help the direct manager with the new hire relocation benefits. Go on Create. The agent builder opens up. We provide the expertise, that is, what the agent is specialized in. Next, we provide the instructions: how the agent is going to achieve its goals and tasks. Additional context is the behavior and tone of the agent. Key components are LLMs and tools. For the LLM, I'm going to choose Anthropic: the base model acts as a starting point, and the advanced model is for deep and complex analysis.
Under advanced configuration, select pre-processing if you want the prompt to be decomposed and planned before sending it to the LLM. Post-processing is for response or answer refinement before showing it to the user. Add the calculator for the total of the relocation costs. Documents come from SAP AI Core document grounding, in this case the relocation policies. Connect to the AI Core destination and the resource group ID from AI Launchpad. This is my resource group ID. The document collection ID again comes from the data pipeline. Finally, we can attach the Joule skill that Daniel created, so that it composes an email and sends it to the new hire. That’s it. Save, release, and deploy. I have a deployed version here. The direct manager has issued a prompt.
Joule has got the results, and the calculator has done the total of the calculation, and Joule has composed an email and then sent it to the new hire. This is the email to the new hire, you can see. It has summarized what it has sent, and that's it. Cool.
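The calculator-tool idea is worth a quick sketch: rather than letting the LLM do arithmetic, the agent hands the line items to a deterministic tool and only uses the result in its answer. The cost items below are invented for illustration.

```python
# Sketch of the calculator-tool pattern: the agent delegates arithmetic
# to a deterministic function instead of trusting LLM math.
# These relocation cost items are invented stand-ins.
relocation_costs = {"flights": 1200, "temporary housing": 3400, "movers": 2100}

def calculator(items):
    """Deterministic total the agent calls as a tool."""
    return sum(items.values())

total = calculator(relocation_costs)
print(f"Total relocation benefit: {total}")  # Total relocation benefit: 6700
```

The agent then drops that verified total into the email it composes, which is why tools like this matter: the number is computed, not generated.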
Joule tooling for low code, what will they think of next?
Yeah, now you don't have to be a pro player with a stadium contract to step up and take a few swings.
All right, who's next on the list?
Next we have Shilpa and Sheena.
Hey, Sheena. Now that employees have been onboarded, they need a laptop and mobile phone to get started. Any easier way to procure these?
Sure, Shilpa. We have our in-house shopping cart app, which is now enhanced with product recommendations as well.
Oh, recommendations? That's something interesting. I want to know.
Yeah, I can show you. This is our new equipment request form, which is used to search for devices like headsets, mobile phones, notebooks, and so on. Let me show you some examples. I'm selecting notebooks from the item category list, and you can see that the app already provided me two options in the item list.
Wow, this is so cool, and it's so relevant. How did you get these?
Yeah, so these product recommendations are coming from an ISLM machine learning scenario powered by Tableau AI. This scenario is trained on the equipment request database, and it can read input values and provide a list of values for the output fields.
Great. Now this is an ISLM app, and the other one is a Fiori app. How are these connected together?
For that, I have to take you back to ADT. You have to invoke the scenario within your application, and for that, you have to adjust the behavior definition of your CDS view. This can be done by using the keyword recommendation function: provide a function name, executed on a field, and then provide your input field. You have to provide this within the side effects keyword. You also have to provide the output field with the syntax "field recommendations," and the corresponding method will have the logic to call the ISLM scenario created earlier. Within the ISLM scenario call, you will be passing a JSON payload consisting of the input values and a top-N query to search the top recommendations using the ISLM scenario.
Once you receive the results, the only task remaining is to map it back to the output field, and there you have your results displayed.
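The call flow Sheena describes, build a payload from the input field, ask the ML scenario for the top-N values, then map the results to the output field, can be sketched like this. The catalog and the scenario stub below are invented; in the demo this is an ISLM scenario called from an ABAP method.

```python
# Sketch of the recommendation call pattern: payload in, top-N results out,
# mapped back to the output field. The scenario and data are invented stand-ins
# for the trained ML scenario in the demo.

def ml_scenario(payload):
    """Stand-in for the ML scenario trained on the equipment request data."""
    catalog = {"notebooks": ["ThinkPad X1", "MacBook Pro", "Dell XPS 13"]}
    return catalog.get(payload["item_category"], [])[: payload["top_n"]]

def recommend(item_category, top_n=2):
    # JSON-style payload with the input values and the top-N query
    payload = {"item_category": item_category, "top_n": top_n}
    results = ml_scenario(payload)
    return {"item_list": results}  # mapped back to the output field

print(recommend("notebooks"))  # {'item_list': ['ThinkPad X1', 'MacBook Pro']}
```

This is why selecting "notebooks" in the item category immediately fills the item list: the side effect fires the function, and the method does exactly this payload-in, recommendations-out round trip.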
Okay, since you have trained on the given data model, I think I can use this in many other use cases while creating a transaction, right?
Exactly, Shilpa. Users can create the transactions in no time. I would also like to highlight how we can use the ABAP AI SDK to leverage LLMs to create or search for general information from the internet—for example, product reviews. Let me show you another example. I'm selecting a product, let’s say a MacBook Pro, from this list, and you can see that a short product review already appears in the field. To show you how this works, I need to take you back to ADT. I have created an SAP GenAI scenario in ADT that uses a model based on GPT-4o mini to search the internet for product reviews and return the results to the calling program.
Now, to invoke the scenario within your application, you need to use the side effects keyword again and provide the triggering and target fields. You will have a corresponding method that contains the logic to call the SAP GenAI scenario. In this method, you call the model, which contains the prompts that I want to supply to the LLM to search for the product reviews from the internet. What I want to highlight here is that everything I’ve done so far—creating the SAP GenAI scenario, the implementation—can all be done within ADT without switching screens.
What I really liked is you have combined ISLM powered by Tableau AI and ABAP AI SDK, making it more effective.
Yeah.
Sheena, in the list report, I saw some products with various statuses. How are you tracking these?
Right now, I just apply a filter. Is there a better option, Shilpa?
Since we're talking about ABAP Cloud, let me highlight the latest innovation in graph analytics, which is the review booklet generator.
Wait, what is a review booklet?
You can use review booklet apps to display, analyze, and validate data. In analytics, you often need more than one query, and that’s where the review booklet is so helpful and convenient. You can use multiple queries, and for each one, you will get a pre-configured display page. Let me show how we can see it.
Yeah.
On the same base data model, I have created an analytical cube. Here, I have given the order quantity and the price. I’ve kept the aggregation defaulted to sum, which makes it a measure. I’ve created two queries: one for the product list and the status, and one defining some rows and columns. All you need is a service, and the queries have been exposed in the service. The prerequisite to use a review booklet is that the service must be bound to Information Access, InA. Here, I have set the binding type to InA. Now let’s see what happens. I just right-click, and then you can generate ABAP repository objects. The wizard is going to make your life so simple.
You will get the review booklet generator app as an option, and the wizard will create a review booklet configuration, an app descriptor, an IAM app, and everything needed to create a Fiori application.
Does it create an actual review booklet app?
Yes, a review booklet app is created, but we are not finished yet.
Okay.
We need to further configure it based on what you need, such as dimensions and measures. For that, we need to open the review booklet in the Review Booklet Designer, which I navigated to from ADT.
This is integrated with ADT.
It’s not just the integration. The context of the queries used in the service will also be passed to the Review Booklet Designer.
Oh.
Let's see what is happening. It's taking a while, which always happens in a live demo.
Yeah.
The analytical queries are loading now. For each query, you can see the analytical details, and I also highlighted the business pages.
Yes.
For every analytical query, a business page is generated by default with the basic configuration. Here you have two generated pages: one for the product list and another for the status. It is taking some time to load the data because this is a live demo. These are the different layouts, and by default, it is a single pivot table. You can also use other layouts. Everything looks good. Let's do a quick preview. This is the same preview you get in the Fiori application available in the Fiori Launchpad.
Oh, nice.
Let's see how it is.
The preview looks excellent, with clear slicing and dicing.
You can view data by category, item, and status, including total price and order quantity. This also helps us understand where we stand with the current status.
Wow.
You get a good pictorial representation of your products' status.
The pie chart looks great. Can I view my top suppliers?
Yes, that’s very easy and intuitive. You can click here to change your layout. Since you want top suppliers, I’ll just add them. Here you go—your top suppliers. I can also apply a different sort order. If you're fine with this, you can save the layout as a new page or override the existing page. This is now ready for business users to use as a review booklet application.
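The slicing and dicing shown in the review booklet boils down to grouped aggregation over the base data: sum the measures (order quantity, price) for each value of a dimension (category, status). A minimal sketch, with invented order data:

```python
# Grouped aggregation, the core of a single pivot-table page: sum one
# measure per value of one dimension. The order rows are invented examples.
from collections import defaultdict

orders = [
    {"category": "notebooks", "status": "open",     "quantity": 2, "price": 3000},
    {"category": "notebooks", "status": "approved", "quantity": 1, "price": 1500},
    {"category": "headsets",  "status": "open",     "quantity": 5, "price": 500},
]

def pivot(rows, dimension, measure):
    """Sum one measure per value of one dimension."""
    totals = defaultdict(int)
    for row in rows:
        totals[row[dimension]] += row[measure]
    return dict(totals)

print(pivot(orders, "status", "quantity"))  # {'open': 7, 'approved': 1}
print(pivot(orders, "category", "price"))   # {'notebooks': 4500, 'headsets': 500}
```

Changing the layout in the designer, say from status to top suppliers, is just swapping the dimension the same aggregation runs over.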
Wonderful. With that, we know that modern ABAP brings together transactional power.
Real-time analytics.
Artificial intelligence.
Thank you.
Thank you.
Over to you, John.
I'm so inspired by what I see. I want to be part of this team too, and I've got a special prospect to work with.
All right, it looks like we're not in the majors now. This is the youth league, so everybody beware. The word is there's an emerging new star that we need to check out.
That's right. This player's got a mentor, a veteran coach with a legendary career. We're talking ABAP, one of the most experienced players in the game.
This new talent that we're here to check out today, ABAP as a Visual Studio Code extension. Let's meet them both.
Let’s start simple. Here we can see my local package inside VS Code with a live connection to our backend ABAP system. Navigation is smooth, and I can move through our classes and CDS artifacts just as you’d expect. Creating objects is easy from the command palette. I can come here and choose “New ABAP Object,” then select “Class.” I can assign it to my $TMP package, give it a name, ZCL_AGENCIES, and a description such as “Demo.” After hitting Enter a couple of times, the new object appears. Now you can see the object tree updated on the left.
Let’s go ahead and save this. Hopefully, we can activate it. Excellent — it activated successfully. I can also run it from here. You can see the results in the console, just like we see in ADT. Debugging works perfectly as well. I can set a breakpoint right here and run it again. You can see that execution stops at line 18, and all the details are available. We can inspect variables, watchpoints, the call stack, and breakpoints — everything is here. I’ll step over the first statement, and now you can see that the Agencies table has new data in it. Here’s the real highlight: we’re not limited to ABAP. Now that we’re in Visual Studio Code, we have CAP and Fiori application development available in the same IDE.
If we look at our CAP application, we can open our JavaScript files as well as the Fiori annotations for our Fiori applications. Having everything in a single IDE makes it a full-stack SAP developer’s dream. This is exciting. As ABAP continues to evolve as a language server, things become even more interesting. I’ll save the rest for next year’s developer keynote, so stay tuned.
This is a promising new solution. It is still in development and needs additional training and refinement, but the potential is incredible. The SAP team is putting in the work to prepare it for full readiness. The future looks strong. ABAP as a language server operating across multiple IDEs, combined with agentic AI and a single integrated development environment, unifies all SAP development types.
It’s a field of possibilities for developers.
Build it and they will come. It’s time to see the entire team. Every great lineup deserves attention. Extensions, integration, automation, and processes, along with AI and tools for developers, data and MCP agents, knowledge graphs, and intelligent applications — all coming together. Each component is strong on its own, and together they become unbeatable. With open developer tooling, agnostic coding agents that work across developer tools, and Joule for developers decoupled from the backend so it’s usable by everyone, clean core extensions, and AI innovations built on SAP BTP — this is the championship roster.
I'll admit it. I wanted one star, and you gave me a team of stars.
Here’s the truth: this team doesn’t win on its own. It takes coaches, it takes vision, and it takes every one of you — the SAP developers around the world
You're the playmakers. You're the ones who use these tools to create real-world impact. You’ve had support all season long. TechEd is your training camp, but the game continues. Learning.sap.com, hands-on code jams, DevToberfest, the SAP community, and the SAP Developers YouTube channels — they are all here to support you throughout the year.
The team is ready. Now it's your move.
Step up, use the playbook we've given you, and build the future.