Palantir Technologies Inc. (PLTR)

Product Launch

Jun 1, 2023

Operator

Please welcome from Palantir, Sasha Spivak.

Sasha Spivak
Head of Corporate Development, Palantir

Hi, everybody. Welcome to our first ever AIPCon. We are so excited to have more than 150 unique organizations represented in this room alone, standing in the back. We have dozens of other customers in local viewing locations all around the U.S. We have customers hosting their own watch parties. Of course, we have those of you joining us on the live stream. We are so honored and excited to unveil an overwhelming volume of new product today on the back of an announcement that Dr. Karp made just a few short weeks ago. Even more than that, we're excited to have our customers unveiling that themselves to you directly. We're continuously inspired by the pace of our customers, the ambition of our customers, and the energy that they bring to share their stories with you.

We hope that you take the stories you hear today, you're challenged by them, you're energized by them, and you run with them. With that, allow me to please introduce Dr. Karp and Matt Babin.

Matt Babin
Head of Energy and Natural Resources, Palantir

Thank you, Sasha, and thank you everyone for joining. It's my pleasure to open up the day again with a conversation with Dr. Karp.

Alex Karp
Co-Founder, CEO, and Director, Palantir

I'm only called Dr. Karp. Some of you guys who weren't here don't know this: when we started, we had no revenue, we had no product, we had no investors. We were almost going out of business. The only thing anyone believed in was that I was a doctor. I kept trying to get rid of it, and then I realized, "Oh, we've got to have something they believe in." That's why I'm still Dr. Karp, but you can call me any four-letter word you like. I can't say it in public.

Matt Babin
Head of Energy and Natural Resources, Palantir

Perfect. I'll shorten it to Karp as we go. I wanted to start by asking you about the letter that was in the news a lot this week by the Center for AI Safety, that said, paraphrasing, that mitigating the risk of extinction posed by AI technology should be treated as a global priority, the same as societal-level risks like pandemics or nuclear war. As the leader of a company that supported governments and private institutions through the pandemic, has been outspoken about the risks posed by the invasion of Ukraine on nuclear activity, how do you respond to that statement?

Alex Karp
Co-Founder, CEO, and Director, Palantir

Well, I think it's great to be here. Thank you for the light softball question. Maybe get someone else next year; we clearly picked the wrong person. We've been involved in building systems in the classified environment, and in the last five, six years, systems that are involved in identifying targets using AI. AI, in this context, meant finding targets in very large spaces, so spaces the size of Texas: find this kind of person, and then a handoff once you find a target. People assume that when you find a target, it would then automatically just disappear. In fact, what really happens is that it's very hard to find these targets. You need to be able to work on disparate kinds of data sets that are very large.

There's a handoff mechanism where it's like: is the identified object next to a hospital? Is the identified object next to children? Is the identified object actually an asset of ours? Is the identified object the idiot we do not want to die? There are a lot of complicated things involved. What I think is: yes, these technologies are very dangerous, but our adversaries are even more dangerous, and because of that, we have no choice but to run headlong. What's interesting about a lot of these statements and what's going on in AI, which is mostly focused around the large language models, is that you really have different factions.

You have a faction that is saying it's hyper dangerous, because right now you need very, very specialized technology, which I believe we provide, to make large language models really, really important in your enterprise, so that you're not delivering a piece of poetry to your enterprise. No one has time to deliver poetry. We need margins to change, safety to be better, a general understanding of our business, the knowledge of one part of our business to be transferred to another. These are things that large language models do exceedingly well with infrastructure that we'll be showing off today. I don't want to take away from the things that are coming, but there are a lot of things we've built that will allow you to do that, and they are very valuable.

Part of that faction is saying this because they're right: these things are very dangerous. What they're not also mentioning is that if we don't build it, our adversaries will, and that won't be very good for us when we have no rule of law. I'm in a constant battle with my academic friends about this, because they believe that in the absence of hard power, whether that's the ability to use AI now or, essentially, nuclear warheads, we would still have a rule of law. We at Palantir believe that you need these weapons to enforce a rule of law. That's our deeply held belief, we've held it for 20 years, and it's cost us a lot.

It's cost us investors, 'cause we wouldn't work in China or in Russia. Sometimes people didn't want to work at Palantir. You also have a coalition, and this is actually not related to people here, but, you know, in the defense establishment where we play a role. Quite frankly, there are very few providers of software that's useful. Everyone else wants to have a debate about how dangerous these things are: let's just debate it, and we can implement them in five years. That is what we have to avoid. Again, my view, the view of our company, is: very dangerous, very valuable.

One of the things I do really like about it, though, is that for commercial enterprises, it gives you the ability, if you're adaptive, to outmaneuver everyone else. The industries that are going to do that most effectively are largely gonna be in America, because American industry and its executives are just very, very pragmatic. If I can change the margins of my business, I can understand my business better, I can implement the cultural and knowledge advantages I have because I developed a way of building my business over the last 20 years better than anyone else in the world, I'm gonna do it tomorrow. That's just an enormous advantage, and that's quite frankly, why this audience is packed. I think that's just something we have to run towards, and that's what we're doing as a company.

Matt Babin
Head of Energy and Natural Resources, Palantir

You mentioned pragmatism and the implementation of software there, and I think maybe, you know, I started with a question around extinction because I'm a pessimistic person. If you're trained as an economist and you work as an intel officer, you spend a lot of your time thinking of how things can go wrong. I think software engineers, by and large, are optimistic people. You're gonna build something that doesn't exist for the first time.

Alex Karp
Co-Founder, CEO, and Director, Palantir

Yeah, well, they're optimists in the long term and pessimists when you tell them what they should do.

Matt Babin
Head of Energy and Natural Resources, Palantir

I was gonna say in transition, but yeah. Then maybe the synthesis of those two things, I think, is realism here at Palantir. The reason I've stayed here this long is we spend a lot of our time, I think, in that synthesis of taking the optimism of that software and putting it against the pessimism of these problems in the world. Where are you most excited about that combination and ambition?

Alex Karp
Co-Founder, CEO, and Director, Palantir

Software that allows you to identify adversaries at a high level. We constantly caution ourselves not to say much, except that the Ukrainians use our software, and it changed the course of history, demonstrably. Software that, you know, we built 20 years ago changed the course of Europe, because we stopped terror attacks while protecting civil liberties. Software that we're now rolling out will change the trajectory of the U.S. economy in a very positive direction. There are certainly flaws in the West, but we are the best group of countries and structures that this world has ever known.

We now have technologies that will allow us to lurch forward, and we have cultural and training biases in this country that are tech-friendly, pragmatic, able to implement things, a very high level of technical competence inside organizations built up over the last five, 10 years, and a super willingness to bring in the best talent in the world. One of the things about large language models that is just really cool is that, for our partners, it's like many of these things: it's horrifically unfair, but it's going to reward places that are already strong, pragmatic, and have specific ways of doing business that are quite valuable. It's gonna allow those industries to lurch forward very, very, very quickly.

I've been at this for 20 years. We built products for Intel, products for Special Forces, products for identifying adversaries, and none of them have the ability to transform a whole economy like this and put capability in the hands of very talented people, ethically, legally, safely. An interesting thing also, and this is more academic, but we've been in the trenches: no one ever believed we cared about civil liberties, but we made a ton of money caring about civil liberties, because it imposes certain architectural requirements. From a technical perspective, civil liberties means you need a branching structure so that you can segment who sees what.

You need access control so that you can verify who sees what. You need the ability to do that dynamically. You need an ability to define what objects mean to each other, which is what we call an Ontology. You need an ability to write against that, where you only write against the segmented part of your data. Now, in the LLM context, you can think of that as a way to process something that is moderately useful to very useful into something that is crazy valuable. Shyam is gonna show you this concept, this thing we built called Agents. All that processing, all those things about, like, academic data protection: they're really just taking an unrefined product, moving it into a refined product, and making it deadly.
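
The transcript doesn't say how these requirements are implemented inside Palantir's products. Purely as an illustration (every name below is invented, not AIP's or Foundry's actual API), the combination Karp describes, ACL-gated reads plus writes staged on a branch, could be sketched as:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: objects carry an ACL, reads and writes are checked
# against the caller's groups, and writes are staged on a branch for human
# review rather than mutating shared state directly.

@dataclass
class OntologyObject:
    object_type: str                        # e.g. "Patient", "Hospital Bed"
    properties: dict
    acl: set = field(default_factory=set)   # groups allowed to see this object

@dataclass
class Branch:
    name: str
    pending_edits: list = field(default_factory=list)

class Ontology:
    def __init__(self):
        self.objects: dict = {}

    def read(self, object_id: str, caller_groups: set) -> OntologyObject:
        obj = self.objects[object_id]
        if not (obj.acl & caller_groups):
            raise PermissionError(f"caller cannot see {object_id}")
        return obj

    def propose_edit(self, branch: Branch, object_id: str,
                     changes: dict, caller_groups: set) -> None:
        # The read enforces segmentation on the write path too; the edit
        # then lands on a branch, awaiting review, not on the shared state.
        self.read(object_id, caller_groups)
        branch.pending_edits.append((object_id, changes))
```

Here a write never touches shared state directly; it waits on a branch for human review, which is the handoff pattern the conversation keeps returning to.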

Something that looked philosophical became valuable in the anti-terror context, is now deadly in the AI context, and can help people transform their businesses very, very quickly within the context of how a business actually runs. You have proprietary knowledge, you have datasets, you have things that are actually regulated, you have things that you do not want to share, you have some things that you would expose to a large language model and other things you wouldn't. You have areas where the large language model needs tooling and areas where it doesn't. You have areas where you can accept 80% accuracy and areas where you can't. You have specialized knowledge in your business that even you find hard to articulate.

Like, why are America and Silicon Valley and this culture in general so good at building enterprise software? I don't know how to articulate that. It's very, very hard to explain how we select and build products, and yet it works. The same goes for specialized know-how: how do you manufacture something very complicated that one company manufactures very well but another company doesn't? How do you actually take all those insights and roll them against your business, while being able to drill down into those insights, so that you can make sure the decisions being made are actually ones that you would support, not just ethically but from a business perspective?

All that comes down to things that we've built that are actually in demand and that we don't have to convince people are valuable because they are just valuable. That is super cool.

Matt Babin
Head of Energy and Natural Resources, Palantir

Yep. When you think of that transformation of the software through those gates and delivering that value, you know, last month in Copenhagen, you were giving a talk, and I think you got a question which was: You know, is the West ahead of adversaries in technology like this? If so, how far ahead? How big is that lead? You mentioned in your answer that, yes, they're ahead, but the issue to worry about—

Alex Karp
Co-Founder, CEO, and Director, Palantir

We're ahead.

Matt Babin
Head of Energy and Natural Resources, Palantir

Isn't other people catching you from behind, it's Western governments not being able to move as quickly and efficiently as they can. You talked about budget appropriations and programmatic things. For this room and most of our audience today that's in the private sector, what do you think that acceleration looks like on the commercial side in maintaining that lead?

Alex Karp
Co-Founder, CEO, and Director, Palantir

Again, I think what everyone in this room is gonna do, and what I think people have already done, and one of the coolest things I've ever seen in the history of Palantir, is you go next door where these demos are, and we're showing off our product, and we have current partners showing other partners how you do this. This thing about software that was always true is: it's all BS until you try it. You know, I was constantly asked: What makes Palantir different from these five other companies? I don't know. Go try all of us. Just try. Why don't we have a payment strategy? I know our partners, future partners, are smart enough to pay us a lot of money if we create a lot of value.

How do I know it will work? Well, we can show you it working, and we can show you how valuable it is. What it basically means in the government context is, you have this problem globally: 98% of software spend goes to people building things by hand that take five years, and then 10 years, and then 20 years. That's just not how software is built at a world-class level. You can't have security if it's not a product. It's very hard to do these things by hand, and it's definitely not gonna work. This is not just America; this is everywhere.

Then you have a lot of places where, you know, one country doesn't want to buy from another country. Quite frankly, many of our allies have a problem that all the products that work come from America anyway, so, like, how do you explain that? In a commercial context, it's just very different. Palantir has, in the commercial context, a multi-year, 10-year reputation for delivering very complicated systems that have worked across heterogeneous industries. Great. They're making some pretty bold claims. You're gonna say: Great, if those bold claims are true, we want to see it. Then you go and test it, and then we enter into a relationship after you've gotten value.

The only thing I would say, and this is kind of obvious, but where it's not obvious, we as a culture have to make it even more obvious: on paper, AI sounds valuable, but it's very hard to know it will be valuable, and enterprises have very complicated internal technical challenges that are now being exposed. One of the very advantageous things for us is that AI will pen test your whole enterprise. You can use our products, but the stronger your enterprise, the more value you're gonna get. You're gonna see this: try it, get value, and then the next question is, well, how do I get even more value? That's a process that's not theoretical.

It just has to be tried and proven. Maybe we should go to questions?

Matt Babin
Head of Energy and Natural Resources, Palantir

Yeah. I was gonna say, let's take some questions from the room.

Alex Karp
Co-Founder, CEO, and Director, Palantir

We could just talk about...

Matt Babin
Head of Energy and Natural Resources, Palantir

Extinction.

Alex Karp
Co-Founder, CEO, and Director, Palantir

There's always one person who has, like, five questions. That person should ask once. We have our long-term advocate here. Friend, ask a question. Yeah, you. I don't know. Want to ask a question? Okay.

Matt Babin
Head of Energy and Natural Resources, Palantir

I'll ask one more while we wait.

Speaker 15

I will ask one.

Matt Babin
Head of Energy and Natural Resources, Palantir

Okay. It worked.

Speaker 15

I will ask one. I didn't know you were... you're looking directly at me, putting me on the spot. I mean, I'm representing one of these organizations, but there are historically places in the world that have not had values similar to the West or to the United States, right? Sometimes they're aligned, sometimes they're less aligned, right? Bringing something that delivers this kind of power and this kind of insight and efficacy into those markets, how does that look for Palantir?

How does Palantir look at that? Like you're saying, bringing some of these Western values and the benefits of the way we think in the West gives us this kind of advantage. How do you look at bringing those kinds of capabilities to bear in other places that might not always align with us, to actually help us, in the long term, deliver the values that we really want everyone to share?

Alex Karp
Co-Founder, CEO, and Director, Palantir

Well, you know, it's interesting. I've been asked this question a lot, and... I don't actually know the real answer now, because when you see what we're building with these Agents, it's really scary. It raises the threshold. The Agent, basically (and Shyam will show you this), takes the output of an LLM, turns it into a hybrid algorithm, and allows you to run it passively, meaning all the time, against your whole enterprise, and depending on the quality of your security, you can segment. Still, you could see how that could be easily abused. And I do wonder if that should be sold to, like, local law enforcement. I don't know.

In the past, we've always been like... I'm in no way a neocon, honestly. I just think we have the West, we have the core West, and we have allies, and we have customers all over the globe, and if they're on our side, I think we should cut them slack, and if they're not... But slack doesn't mean everywhere, and we just have these ongoing discussions, and we've refused to work with lots of people, and it's cost us huge money, and I've been yelled at. I still am. Quite frankly, if I were fireable, I would have been fired many times over this. Everyone's fireable, but it's a little harder, because the alternative to me is an engineer, and everyone's afraid that even with our products, we'll have no revenue. It's really hard.

Like, wait till you see Shyam's demo. I'm sure we should sell this to U.S. industry. I'm sure we should give it to the clandestine services, special operators, current clients, the U.S. military, the Five Eyes. I think we're gonna have to have long discussions about where else, because, you know, you don't wanna have something that could... You know, we've been in the business, actually, of protecting everyone's right to their own liberty, which also means your own lifestyle, your own secrets, your own personal proclivities, your health records, and I think that's one of the things that makes our society so special. There's a lot of new thinking that's gonna have to go on about this.

In the near term, if you're a U.S. industry in Europe, clandestine services in the West, we'll definitely sell it to you, and we'll have to think hard about everyone else.

Matt Babin
Head of Energy and Natural Resources, Palantir

Other questions from the room? Yep, one right here in the front.

Speaker 16

There's no shortage of problems in healthcare, and now that you're kind of getting under the hood in healthcare, how do you see Foundry transforming that whole industry across the different hospitals, across the whole spectrum?

Alex Karp
Co-Founder, CEO, and Director, Palantir

Well, thanks to the great work of two people who are here and others, we power, I think, 13% of hospital beds in the U.S. Those use cases are super intricate. If you just look at Foundry, the use cases are optimization. How do you take scarce resources and disperse them in the most financially beneficial but also ethical way? You have a million hospital beds, you have 1.5 million patients. Who gets them? Under what conditions? That has economic, philosophical, moral, and legal implications that you can deal with in Foundry. Also, in the hospital industry, and this is before you begin to integrate AI, you have highly trained workers, nurses and doctors, who need to be involved in the judgment chain.

How do you do that, and then do it systematically? That's a classic Foundry problem. I think you're also going to see that, given the way we can manage data in a completely transparent way and show the transforms, meaning how the data is joined, that industry is gonna be ideal for Palantir and AI, simply because of certain optimization and knowledge-transfer issues. Not everything a nurse or a doctor understands should be transferable to an AI, but some precursor decisions that can be evaluated should be, in the same way that, in other contexts, you identify and then hand over to a human. That can only be done under the condition that you can actually see how it was done, even simply for legal, ethical, and reputational reasons.
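
As a toy illustration of the allocation problem as stated (more patients than beds, a transparent ranking rule, and a mandatory human-review flag; the `acuity` score is an invented stand-in for whatever criteria a real hospital would use):

```python
# Rough sketch: rank patients by a visible scoring rule and flag every
# assignment for clinician review, the "hand over to a human" pattern
# described in the transcript. Not a real triage algorithm.

def allocate_beds(patients, n_beds):
    """patients: list of dicts with 'id' and 'acuity' (higher = sicker)."""
    ranked = sorted(patients, key=lambda p: p["acuity"], reverse=True)
    assigned, waitlisted = ranked[:n_beds], ranked[n_beds:]
    # Every decision is an explicit, reviewable record, not a silent
    # side effect, so a clinician can inspect and override it.
    return (
        [{"patient": p["id"], "decision": "assign", "needs_review": True}
         for p in assigned],
        [{"patient": p["id"], "decision": "waitlist", "needs_review": True}
         for p in waitlisted],
    )
```

The point is not the scoring rule itself but that each outcome is a record a human can audit before it takes effect.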

The other reason why software is so important in that industry is that it's margin-challenged. If you have a margin-challenged or resource-challenged industry with complicated engineering and societal variables, that is something you need software for, and that's why our software has been so impactful during COVID. I mean, we barely had software sales into the hospital industry a year ago; it's just going like that. You will see that once you begin to integrate, it's going to be very hard for other people to get in. Our product's ideally suited for that, plus AI, because, purely, you know, one of the big concerns in a hospital is litigation. It's like, I brought this patient in. Okay, great.

Well, you're gonna get sued either way, but you need to be able to show: this is exactly how the decision was made, based on these data sets. That requires a branching architecture, ACLs, an ability to provide this transparency, an ability to show how the data was organized, meaning the transforms. If you're running AI on top of it, you're going to have to be able to unpack that in court, which we do natively. One of the valuable things for us, besides helping hospitals, which is incredibly valuable and gratifying (and my father's a doctor, so I might finally get some respect at home),

is that it is a much more intricate, difficult use case than people outside that industry realize, and it highlights where it works and where it doesn't. We also have a vested interest, because we can show: okay, these are the very complicated issues that, by the way, you're also going to have if you're building an engine, drilling for oil and gas, doing pharmaceuticals, building water plants, or doing really anything that involves the convergence of complicated engineering and regulation, which is basically every industry in this country. This one particularly highlights it, partly because of the resource challenges and partly because of ongoing litigation challenges.
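
The transcript doesn't describe how Foundry records transforms. One rough, hypothetical way to get the "show exactly how the decision was made" property is to log every transform with its inputs and an output fingerprint, so the chain behind a decision can be replayed later (all names below are invented for illustration):

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative lineage log: each transform is recorded with its name, the
# inputs it consumed, a digest of its output, and a timestamp, so the full
# chain of joins and filters behind a decision can be reconstructed.

class LineageLog:
    def __init__(self):
        self.steps = []

    def record(self, name, inputs, output_rows):
        digest = hashlib.sha256(
            json.dumps(output_rows, sort_keys=True, default=str).encode()
        ).hexdigest()[:12]
        self.steps.append({
            "transform": name,
            "inputs": inputs,
            "output_digest": digest,
            "at": datetime.now(timezone.utc).isoformat(),
        })

    def explain(self):
        # Human-readable chain: which transforms ran, in what order,
        # over which inputs.
        return [
            f"{s['transform']}({', '.join(s['inputs'])}) -> {s['output_digest']}"
            for s in self.steps
        ]
```

A real system would also record code versions and branch identifiers; the sketch only shows the shape of the audit trail.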

Matt Babin
Head of Energy and Natural Resources, Palantir

Oh, we'll just repeat that question for everyone else.

Speaker 17

First of all, I'll try not to embarrass Ganesh, who's our representative. In your perspective, who is the biggest decision-maker in the C-suite when it comes to adopting this technology? The CFO, Chief Innovation Officer, Chief Digital Officer, Chief Technology Officer? Who is the belly button really pushing this across the line inside a culture that's not ready?

Matt Babin
Head of Energy and Natural Resources, Palantir

Just to repeat the question for those listening to the stream. The question was, putting aside the remark about Ganesh, a colleague of ours, who's the main person in the C-suite that you think is the—

Alex Karp
Co-Founder, CEO, and Director, Palantir

Well, no, I think part of it is, say there's resistance in the org. The real answer is, it just depends on the org. You know, we have partners here where the CEO wants to get involved in technical decisions, and we have partners where that's not the case. We just need somebody who has the authority, if they see value, to push quickly. In some orgs that's the CFO, actually (surprise), and in some orgs that's the CEO. Usually, it honestly ends up being a coalition around somebody who's got credibility. By the way, sometimes it's not in the C-suite. I mean, I see this all the time.

We got our company off the ground because special operators went to the generals and said, "Yeah, you may not like that guy, but he's bringing us home safely." The generals were like, "Yeah, I don't like that guy, but if you bring them home safely, I will buy your software." That's a classic challenge here. The AI thing has just shifted this, though. Two years ago, the answer would have been the CEO, or it doesn't work. A year ago, it was mostly the CEO. Last time we did this conference, I guess a couple of months ago, it would have been like 70/30. Now, we have a lot of very technical people in the technical part of the organization saying, okay, I'm going to do this.

I have these five highly technical questions, which largely, you know, you'll see being answered, but they come down to safety, security, understanding the limits of LLMs, understanding where they have to be augmented, those kinds of questions. If you can answer those questions, and more importantly, show it, they're off to the races. It really, really depends. It also depends geography to geography. One of the things that's made America very successful for us is that more people can make these decisions. When we're in Europe, it really has had to come from the C-suite, and then we've had a lot of resistance. You also see huge differences even in government. Special operations works differently than Intel, which works differently than the military. In some of these cases, it's very disparate.

You can work with one part, and the other part you can't work with. Again, you know, we constantly get asked by experts, meaning (I probably shouldn't say this): I could have built a business, I should have built a business, I could have gone to a hedge fund, but instead I am an expert on software analysis, and I rate software products and decide how valuable they are for Wall Street, right? They always want to know: how valuable is it, how big is the TAM? It's actually quite hard to calculate for us, but one of the things that has made the TAM much bigger in the U.S. is that more people can decide to say yes.

In the AI context, since we're on the frontier, and everyone knows we're on the frontier, one of the really cool things is that it's the first time I've seen a market, maybe since we built our Intel product, where everybody knows we're on the frontier, and therefore there's no playbook. This really helps us, because when there's a playbook, maybe we disagree with the playbook, which we often do. Like: software needs to do these four things to fit into our architecture, or we're not going to buy it. That has really slowed Foundry growth down, because we actually have a different view. Our view is that it should be category-defining, meaning you should be able to answer these questions independent of what an expert says software should be like.

Someday you are going to have to ask, answer, and write to your enterprise, under difficult conditions, questions that you're not asking and answering tomorrow, and you're going to have to be able to do it in a way that's not just data science but is algorithmic. We've always thought that, and we lost a lot of those battles. Foundry basically grew 67% last year; in a world that was more frontier, it probably would have grown 3x that. Now, in the AI context, there really is no playbook for how it should look. People just want it to work. That's, again, one of the main reasons we've had more inbound for Palantir in the last couple of weeks than we had all last year.

It's precisely because the playbook is out the door. Everyone knows this could be very valuable if implemented correctly. Everyone knows if it's not implemented correctly, you're gonna get some poetry and it's gonna be expensive. Yeah, that's very different.

Matt Babin
Head of Energy and Natural Resources, Palantir

Great. Thank you all for the questions. I think this is a great introduction to what we're talking about here and what that frontier will look like. Any other closing remarks?

Alex Karp
Co-Founder, CEO, and Director, Palantir

Thank you for coming. For those of you new to Palantir, you've picked a great time to come. We've never had a vibe this good. It's been pretty good in the past, but it's not like this. You know, I hope you enjoy being here as much as I enjoy having you here, and hope to see you soon.

Matt Babin
Head of Energy and Natural Resources, Palantir

Thanks.

Operator

Please welcome from Palantir, Chief Technology Officer, Shyam Sankar.

Shyam Sankar
CTO, Palantir

Morning! I am so excited to be here to launch AIP, this product we've been working so hard on. You know, in my almost 20 years at Palantir, I've just never been this excited about what we've been building. It's not just the impact; we've worked on tons of impactful problems. It's really the speed at which this impact is gonna happen. AIP is really our core set of technologies that are designed to bring LLMs to your enterprise, to supercharge and accelerate your experiences: everything from integrating data and hydrating your Ontology to building AI-enabled applications and even AI Agents and Copilots. AIP enables you to deploy LLMs anchored in your data on your private networks. It enables safe handoff between the tools in your enterprise, the actions, and other AI models you've already built.

It enables you to govern and control this, so you ultimately have deep trust in the AI. This AI technology is going to be massively disruptive. It's absolutely a case where the dynamics are winner-take-most, if not all. Be the winner by disrupting first. As I've been thinking about this, I've been asking: where is the value likely to accrete? I believe it's gonna be in the workflow and the application layer, and that should be quite exciting for all of you here. That's because the models are already commodities. The pace here is accelerating and exhilarating. This slide, you know, I made it a few weeks ago, and it's already out of date. FALCON was released with an Apache license yesterday.

You know, we've gone in less than two months from LLaMA to Alpaca to Vicuna. We've gone from GPUs to CPUs. We've gone from massive cloud software stacks to running these models on a Raspberry Pi. The editorial from this, like, what does it mean? It means that you're likely gonna have a menagerie of models, not a single model, and you're gonna be able to cheaply customize and tune these models to your needs. It also means that the biggest, baddest model is not the one that's most likely to win. There's this Goldilocks of power versus iteration speed: what is the right-sized model that you can iterate on most quickly? The reason that's the case is that LLMs alone are not enough. You need tools. Actually, they need tools.

For example, they need a tool that will help calculate the enterprise profitability of an account or compute the forward-looking inventory for a specific product. They need access to an action registry to execute operations across the enterprise. They need access to a semantic layer with rich objects and links to actually define the proper grounding and act as an anti-hallucinogen. Sometimes this is called retrieval augmented generation, or RAG. You probably have heard that term a few times now. They also need access to a branching environment, something Dr. Karp was mentioning, where they can actually stage proposals or edits to the world for humans to review. AIP comes not only with an incredible set of tools, it comes with a tool factory, so that you can build your own tools to augment them and make this maximally useful in the context of your enterprise.
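The tool-and-grounding pattern described here can be sketched in a few lines. This is a hedged illustration, not the AIP API: the registry class, the two example tools, and the `ontology` dictionary are all hypothetical stand-ins for the action registry and semantic layer the talk describes.

```python
from dataclasses import dataclass, field

@dataclass
class ToolRegistry:
    """A minimal 'tool factory': register callables an LLM may invoke."""
    tools: dict = field(default_factory=dict)

    def register(self, name, fn):
        self.tools[name] = fn

    def invoke(self, name, **kwargs):
        return self.tools[name](**kwargs)

# Two example tools of the kind mentioned in the talk.
def account_profitability(revenue, cost):
    return revenue - cost

def forward_inventory(on_hand, inbound, demand):
    return on_hand + inbound - demand

registry = ToolRegistry()
registry.register("account_profitability", account_profitability)
registry.register("forward_inventory", forward_inventory)

# Retrieval-augmented grounding: fetch semantic objects first, then
# answer only from retrieved facts (the "anti-hallucinogen" role of
# the semantic layer). The ontology here is a toy in-memory dict.
ontology = {"ACME-01": {"revenue": 120_000, "cost": 95_000}}

def grounded_answer(account_id):
    facts = ontology[account_id]                      # retrieval step
    profit = registry.invoke("account_profitability", **facts)
    return {"account": account_id, "profit": profit, "source": "ontology"}

print(grounded_answer("ACME-01"))  # profit is 25000, grounded in retrieved facts
```

The design point is separation of concerns: the model proposes which tool to call, but the computation and the facts live outside it.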

How do you get an LLM to post an invoice to NetSuite? You need to build a tool for that. How do you reallocate inventory in SAP? That's what AIP really empowers here. The vision here is really this kind of cyborg enterprise: reimagining human-Agent teaming, so that business processes are managed by an army of human-Agent teams, and employees oversee the AI and its recommendations. Your enterprise is moving at machine speed. Let's build that enterprise together, brick by brick. Let's start with data integration and really an entirely new paradigm for that. You know, when I was young, WYSIWYG meant what you see is what you get, and I think now it really means what you say is what you get.

You can move much more quickly building things by simply saying what it is that you want. Let's start by transforming some of these datasets that we see here. Very clearly, I can see what the AI does and does not have access to. In this case, it's just the metadata. I can select the datasets that I would like to transform, and I can simply say something like: Give me all the claims from the West Midlands. I can say that in English and German or even Ukrainian. What I get back is a series of transform boards that I can easily inspect and make sense of and see what is the AI doing.

I can ask more complicated questions like, "Give me the unique landlords with open cases and construct a geo point column that has latitude and longitude," and I can visualize these items on a map and verify the location. If we fast forward a little bit, and we assume we have a more complicated-looking graph here, and I'm new to it, I can actually ask AIP to explain to me what these transforms are doing. Think about how that's going to transform how you do documentation, how you do change management, even the comment on a code commit when you're changing your pipeline itself. Why do I need to say anything at all?

If I have my target Ontology here on the right, and I have the datasets I'm trying to integrate on the left, why can't I just select it all and ask AIP to connect the dots for me? What we will get back is a series of builder boards, which we can very clearly see with the purple annotation, with AI-generated content, that I can step through and verify as a human, that these make sense to me. AIP is your AI operating system. Every operating system has a command line. AIP's is Terminal. With Terminal, I can easily interact with all of my enterprise knowledge and my enterprise applications through a large language model. Let me show you that. We start off in our supply chain control tower, and we see that we have a new notification about a disruption to our Vellore Distribution Center.

This email has come in. Why don't we open up Terminal, load up our distribution centers, and start by simply saying, "I received this email." We'll paste in the content of the email, and then we'll ask to visualize the order statuses that are impacted by this context here. What you can very clearly see on the right-hand side is the handoff overview. I can see both the chain of thought and the chain of tools that AIP is employing to answer this question for me. It's interpretable, it's understandable. I get back the visualization. I select just those that are waiting for shipment, and I can drill down further into this. The relevant question I think now is maybe to figure out what's the revenue at risk here.

What's the total value of my high-priority orders that are impacted by this distribution center disruption? Again, I see, through the handoff overview, how AIP is breaking down and answering these questions, how it's employing the tools that we talked about earlier to solve this for me. $13 million. I can see all of the objects I have access to here. You can kind of think about this as like ls at the command line. More importantly, I see all the tools I have access to: not only the tools that are native to AIP, but also the tools that I've custom built with the tool factory. Let's call one of those ML models now: I would like to run an order reallocation model that is specific to my enterprise.

AIP will find it, bring it to me, and allow me, as a human, to go execute that model. We get the results of that, but this is not the way I want to look at it. What I want is to look at this in the context of my operational applications. I ask to open up my operational inventory allocation application. I can see I have 53 high-priority orders with $13 million at risk. I can see on the map the distribution centers I'll be drawing inventory from. On the right, I see the actual edits to my inventory plan that are being proposed to me. From right here, I can take this operational decision.

I have taken a problem that would have taken me 300 minutes to solve, solved it in 3 minutes, and with AI Agents, I will show you soon how you can solve this next time in 3 seconds. You don't want to live in the command line all day long. You want to be able to create on-the-rails workflows for your enterprise users, and you want it to fit them perfectly, and you want to be able to adapt those workflows to their evolving needs. You want to do that by simply just saying what it is that you want. How did we build that supply chain control tower application that we started Terminal with? Well, by asking for that application. If we start here, we provide a description.

I would like a supply chain application that allows me to look at manufacturing plants, distribution centers, customers, and some other metadata. This gets me 80% of what I want. I get the scaffolding of my product generated for me. Now I want to add some metrics to this. Let's look at OTIF, on-time in-full. Let's look at monthly sales and deliveries, and I can get the widget created for me. Now, this is not an application, it's a dashboard. To the extent I have a supply chain problem, I can see it, but I can't do anything about it. The alchemy of AIP is to be able to change that simply with a single spell, to ask for a button in the header that will allow me to reallocate my inventory. Let's do that. We'll see the blue button appear in the top right.

How is that working? You know, it's obviously not magic. This is a concrete manifestation of a tool. If we go to the Ontology and we look at the management application, I can see I have this function, reallocate product, that takes in a target and source distribution center, product ID and amount to reallocate, and that it triggers, via a webhook, a call and write back to SAP. This again, is a concrete manifestation of a tool. In this case, a human is clicking the button to employ that tool, but this is a little bit of foreshadowing where we will see how we give this to an LLM to supercharge our enterprise. Okay, WYSIWYG applications. Is that what I want? I think what I want is a WYSIWYG enterprise.
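As a hedged sketch, the reallocate-product tool just described, with its typed inputs and webhook write-back to SAP, might look roughly like the following. None of these names are the real AIP or SAP API, and the webhook call is stubbed out rather than doing real HTTP.

```python
import json

def post_webhook(url, payload):
    # Stand-in for an HTTP POST; a real tool would use an HTTP client
    # and the actual write-back endpoint of the system of record.
    return {"url": url, "body": json.dumps(payload), "status": 200}

def reallocate_product(source_dc, target_dc, product_id, amount):
    """Tool a human (or an LLM) can invoke to move inventory between
    distribution centers, writing back via webhook as in the demo."""
    if amount <= 0:
        raise ValueError("amount to reallocate must be positive")
    payload = {
        "source": source_dc,
        "target": target_dc,
        "product": product_id,
        "qty": amount,
    }
    return post_webhook("https://sap.example.com/reallocate", payload)

# A human clicking the blue button would trigger something like:
result = reallocate_product("DC-Vellore", "DC-Chennai", "SKU-42", 100)
```

The same callable can later be handed to an LLM as one of its tools, which is exactly the foreshadowing in the talk.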

I want to build an army of AI Agents that do what I say for me. I want my employees to become managers of Agents who review AI-generated recommendations. I don't want to be 100% better. I want to be 100 times better. Let me show you how. Let's build that together, brick by brick, starting with AIP Logic. Here I'm going to build a new function, and the input to this function are going to be these emails I get about my supply chain. I can add a logic block. It could be a Python logic block, it could be a Java logic block, but let's make this an LLM logic block. I will pass into this a prompt, which is both the content of the email, but also a task to read this email and extract locations from it.

I will define my output of this function. It's a string array of locations. It's called Locations. This is an integrated development environment. Now I can go run my function, create some mock data around a hurricane coming into Southern Florida, and test what this function is able to extract for me here. How does it work? Let's run it, and we'll see we get back Miami and Fort Lauderdale, and we can open up the debugger to actually step through and see the interaction between my code and the LLM: what prompts am I sending, what am I getting back, how is that working? Great. This is not what I want. I don't want string locations. I want to know what distribution centers are going to be impacted by this storm.
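A minimal sketch of this first logic block, assuming a stubbed `call_llm` in place of the hosted model: an email goes in, the prompt asks for locations, and the output is a typed string array, exercised with the mock hurricane email. All names are illustrative, not the real AIP Logic API.

```python
def call_llm(prompt):
    # Stub: a real logic block would call a hosted LLM here. For the
    # mock hurricane email, pretend the model extracted two cities.
    return "Miami, Fort Lauderdale"

def extract_locations(email_body):
    """LLM logic block sketch: returns a string array of locations."""
    prompt = (
        "Read this email and extract the locations it mentions, "
        "as a comma-separated list:\n" + email_body
    )
    raw = call_llm(prompt)
    # Parse the model's free-text reply into the declared output type.
    return [loc.strip() for loc in raw.split(",") if loc.strip()]

mock_email = ("A hurricane is expected to hit Southern Florida near "
              "Miami and Fort Lauderdale this weekend.")
print(extract_locations(mock_email))  # ['Miami', 'Fort Lauderdale']
```

The point of the typed output is that the surrounding pipeline can rely on the shape of the result even though the model's reply is free text.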

To answer that question, we have to give the LLM its first tool, the ability to query the Ontology for our distribution centers, and a prompt for how to use this tool: use this as your location data set. Then I need to change the output. I no longer want strings; I want object references. I want an array of distribution centers as the output value. It's strongly typed here. Let's make that change and rerun our function and see what we get back. As you'd expect, we get distribution centers. You can hover over them. You can see these are rich objects, defined by the semantics of my Ontology here. Now the debugger is substantially more interesting.

You can actually see the call, and you can see how it's employing the tool of calling the Ontology service to go figure out what distribution centers to return as answers. You know, you can't make these up. This is Retrieval-Augmented Generation. I don't think this is what I want either. What I really want is the LLM to help me figure out what to do about the inventory shortages that are caused by this disruption. Let's enable that by adding another LLM block here. We're chaining our logic together. I'm going to pass in the email, the locations from above, and a new prompt to use these tools I'm about to provide to determine how to resolve the shortage. We'll go over three tools. The first is reallocate product.

We saw this tool earlier in the supply chain application, and we give it a prompt for how to use this tool. In addition to that tool, we're going to give it two more tools, to calculate the shortage KPI and to get reallocable inventory. I want to change the output type as well, though, because what I want back is a scenario. I want an AI recommendation of edits to my inventory plan that I could consider making. When we run this, we will get back a rich set of edits to the inventory plan, a scenario that I can evaluate. The debugger is substantially more interesting now.
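The chained second block can be sketched as follows. In the real system the LLM decides which tool to call at each step; here a deterministic loop stands in for that reasoning, and all tool names and inventory numbers are made up for illustration.

```python
# Three toy tools mirroring the ones handed to the LLM in the demo.
def calculate_shortage_kpi(dc):
    shortages = {"DC-Miami": 40, "DC-Tampa": 0}
    return shortages.get(dc, 0)

def get_reallocatable_inventory(dc):
    spare = {"DC-Atlanta": 25, "DC-Dallas": 30}
    return spare.get(dc, 0)

def reallocate_product(source, target, qty):
    return {"edit": "reallocate", "from": source, "to": target, "qty": qty}

def resolve_shortage(impacted_dc, donor_dcs):
    """Chain tool calls to build a 'scenario': a list of proposed
    inventory-plan edits for a human to review, not applied edits."""
    remaining = calculate_shortage_kpi(impacted_dc)
    scenario = []
    for donor in donor_dcs:
        if remaining <= 0:
            break
        take = min(remaining, get_reallocatable_inventory(donor))
        if take > 0:
            scenario.append(reallocate_product(donor, impacted_dc, take))
            remaining -= take
    return scenario

print(resolve_shortage("DC-Miami", ["DC-Atlanta", "DC-Dallas"]))
```

The output is the "scenario" the talk describes: a staged set of edits, which is what makes human review possible before anything is written back.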

The first block here shows us where it's getting the distribution centers. The second block shows us repeated calls: how the LLM is invoking different tools, the return values of each one of those calls, the chaining of these calls to ultimately arrive at the solution and provide me this scenario. I can save that as a test. This is a unit test. I have expected input and expected output. Now I have an ability to regress over this model. We also have the full telemetry. If you allow me a cooking-show moment: say this function has run in production, you know, 5,000 times or so. I can see every production run of it. I have the full trace of the debug log.

I can, for example, select just the times that we called the reallocate product tool. I can select a single run of that and see what happened. Walk through the trace. I could save this as its own unit test. This is how you build fundamental trust in these LLM functions. It's how you ensure quality as you upgrade your model or as model versions change over time. The final part of this, though, is to turn that logic into an AI Agent. Let's start by defining when I want my Agent to run: on a schedule or when a new email object is created, which is the case here. I select the object type and connect it to the email alert categorizer we just built. I set up who has access to approve the AI-generated scenarios here.
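The save-a-run-as-a-unit-test idea can be sketched like this, with an illustrative `llm_function` standing in for the deployed logic function. The point is simply that captured input/output pairs let you regress whenever the model or its version changes; none of these names are the real AIP API.

```python
# Saved tests: expected input and expected output captured from real runs.
saved_tests = [
    {"input": "Hurricane approaching Miami.", "expected": ["Miami"]},
]

def llm_function(text):
    # Stand-in for the deployed LLM function under test; a real one
    # would call the model and its tools.
    known = ["Miami", "Fort Lauderdale"]
    return [city for city in known if city in text]

def regress(fn, tests):
    """Re-run saved tests against the current model; collect failures
    instead of raising, so a model upgrade can be evaluated in bulk."""
    failures = []
    for case in tests:
        got = fn(case["input"])
        if got != case["expected"]:
            failures.append({"case": case, "got": got})
    return failures

print(regress(llm_function, saved_tests))  # [] means all saved tests pass
```

Run against a new model version, a non-empty failure list flags exactly which captured runs no longer reproduce.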

When we put this into production, we can go to a live view, where we actually watch the human-agent team at work, the Agents processing objects as they come in. I have full access to the chain of thought and the chain of tools that the Agents are using. I can actually view the suggested AI edits in the context of the operational applications to make well-informed decisions. I can switch to a view to see and manage my Agents overall: a process view of how many Agents I have running, what processes they're working on, and the historical performance of those Agents. There you have it. You know, brick by brick, we built this cyborg enterprise with AIP. What you say is what you get. I think LLMs are going to change every aspect of user experience in software.

In addition to sharing what AIP is today, I also wanted to share a little bit of the sorts of things we're working on, how we're thinking about user experience in the future. You know, we have started to experiment with cutting-edge approaches to UX that deeply integrate LLMs, Ontology, and Agents into your AI operating system. Let me show you some of those concepts. We start here, and on the left, we'll have a panel that has the Ontology, Agents, and logics directly accessible. On the right, we have Leo, our AI Copilot, that not only allows us to ask questions, but more importantly, wields the applications itself. I want to create a new work plan. Leo tells me the first thing to do is data integration. It opens up the pipeline builder application for me.

Through the Ontology panel on the left, which I can directly interact with, I can see the objects and the relationships in the context of the apps here. I can ask Leo to bring new data to bear, both from the Ontology side and the datasets that I need to integrate, and use features that you've already seen to, for example, select this and ask AIP to connect the dots. I can very clearly see what content is AI-generated, and I have the explainability. I can step through, you know, step by step, and understand how this problem was broken down. That trust is a core part of the concepts around human-agent teaming here. I want to deploy this. It will open up a branch, a proposal. It'll allow me to submit that, to compare the diffs between the old and new pipeline.

I'm getting workflow assistance from the Agent now. Once I push this to production, it will move me to an operational view, where I can actually drag my Agent to the business enterprise Kanban process to start the automation. Through Leo, I have full access to see what the Agent sees, but on top of that, through what you say is what you get, I can build entire new widgets to help me, in this context of human-Agent teaming, see more and make better decisions. At a glance, I can see what data the Agents do and do not have access to. I can view the guardrails and the rules and security around any one of these Agents. I can also view that in the context of the operational applications themselves.

I think every UI is going to completely change. This is the integration layer. I think, you know, I'm very excited about this. I think this is the moment that we should be wondering, why should we not be even more impatient? You know, how do we make decisions even faster? How do we think bigger about the potential of transformation of these technologies? We will have booths set up throughout the Shire to show you more of this stuff. You can get your hands on it. You can play with it. We would love to ideate with you. Thank you all so much for joining us.

Operator

Please welcome Senior Director Engineering, Government Solutions at Cisco, Mike Younkers.

Mike Younkers
Senior Director Engineering of Government Solutions, Cisco

It's hard not to want to run up there. I always watch people do that, and I'm like, "I don't want to run on stage." I appreciate the opportunity to be with you today to explain what we're doing with Palantir at the company I work for. As was stated, my name is Mike Younkers. I focus on government solutions inside of a company called Cisco. That's Cisco with a C, the networking company, not the food company, in case there's any confusion about that. I've spent 20 or so years focused on the U.S. federal government at Cisco in a presales systems engineering role, and about a year ago, I shifted into an IT operations role inside of Cisco, supporting that customer.

Today, I want to share with you... I'm going to try to thread a needle here, depending on where you sit in the audience, if you support the federal government or deal with regulations based on the U.S. federal government, you'll understand kind of the lens that I'm applying to this. If you support global governments or operate in other regulated industries, I think this will resonate with you as well. The needle I'm going to try to thread is to kind of talk about some different problem sets that we're faced with at Cisco and how we're solving them. To be crystal clear, it was only a few months ago that I sat in the audience where you are now, and I kept hearing about Foundry and the Ontology, and it turns out I'm an Apollo customer of Cisco, so, I mean, of Palantir.

I just want to be clear about that up front as well. I was at FoundryCon, thinking and, you know, trying to figure out how to adopt Apollo, and here I am at AIPCon to talk about Apollo, but I just want to be clear about the problems we're solving. After plenty of years and a lot of money invested from Cisco and me, I can stand before you today and say, my purpose in life is service. I work for a company whose purpose is to create an inclusive future for all. That's what we're doing at Cisco. It is our stated intention to create an inclusive future for all. I think that's really cool. I love these big, aspirational goals.

I think they're really important to have because it's how you rally troops and how you go take on big problem sets. I've learned over time, I've always heard this statement, an existential threat, and I've never understood it until I really started thinking about the problem sets that we're facing, myself and my team at Cisco. What I mean in this context is, if I am trying to provide service to the customers that I support, which in the context of this discussion is the U.S. federal government, and if Cisco is trying to build an inclusive future for all, but if we can't deliver our capabilities to our customers so that they can serve their customers in the environments that they need it in, that is a true existential threat.

I supported a team of a couple hundred engineers around the world supporting global governments. The idea is that we weren't going to be able to help our customers solve their problems, which ultimately, if you play that out, means that we as a team don't need to exist. That's an existential threat. I'd heard this statement all along, like, you know, people say this, they're like: Oh, we're facing this existential threat. I had no idea what that meant until I started looking across my team and realizing I don't ever want to have to lay off people ever again in my lifetime. To me, very viscerally, it was this, you know, kind of, what can I do? What sorts of problems can I solve inside of Cisco?

If you extend that and think about if my company, if our entire purpose is to create this inclusive future for all, if we can't reach all, then we can't actually realize our own vision. We found a way to solve this problem. Surprise, it's with Palantir. But let me kind of build this out a little bit, and this is where I'm gonna try to thread this needle. If you look at this picture, the kind of logos in the middle, if you're Cisco customers at all, you might recognize Webex, you might recognize Meraki. The cloud is our CX Cloud, which is something that we're doing to try to provide better capabilities and services to our customers. You may or may not be familiar with that.

At the bottom is meant to represent the kinda global nature of the company that I work for. The government services and solutions that I'm now responsible for providing inside of Cisco are global in nature. They're not just focused on the U.S., but the U.S. government is where my background is, so that tends to be where I kinda talk through these use cases. If you think about this, there's kinda Europe on the left. Think about things like GDPR or data sovereignty types of problems.

You know, there are issues there that need to be solved for that are challenging if you're a company like mine who wants to provide cloud-based services to our customers in Europe, but we're a U.S.-based company, and if our data resides in the U.S., then that becomes a problem for our customers, right? In Europe. In the middle is kinda North America. I can bore you to tears with U.S. government regulations, be they actual federal government regulations as it applies to the government itself, or the regulations that they push forward into these other industries that we all have to deal with. On the right is Australia, and I put that there because part of my heart is in Australia right now. My youngest son is doing a study abroad in Australia.

The point is, we have actual legitimate use cases that we're trying to solve with the Australian government, which kind of proves the point that I'm trying to make here. If you think about this: multiple cloud offers, and I'm just representing three of the bunch we have inside of Cisco in this picture, represented by Webex, Meraki, and CX Cloud, going to multiple geographies, and then inside of those geographies, multiple sets of regulatory requirements that we have to meet along the way. This is an incredibly complex problem, and the question was asked over here earlier about what level of the company has to be engaged to kind of push these things forward.

I'm a few steps removed from the C-suite inside of my company, but I can tell you, this is a problem that the C-suite of the company that I'm working for is trying to solve. How do we drive this inclusive future for all, meet the demands and needs of all of these customers in this world of sort of economic nationalism? That's another one of my favorite phrases that I never understood until I started having to live it. You know, how are we solving these, right? Because the idea, the whole promise of cloud, is I can build something where I want to build it and run it on behalf of customers, and the customers don't have to deal with that problem set anymore. If we can't meet the requirements of where we want to take these solutions, then we have a real problem.

One of the ways we could meet these requirements, and maybe some of you guys have tried this, is to just build separate environments everywhere you want to go. Let's take a solution. We'll pick on Cisco Webex, and let's think about this. If I wanted to build Webex to meet the United States government, just as itself a customer, it has five different sets of requirements when you look at cloud requirements to meet the United States government. There's classified for the Intelligence Community and for the Department of Defense. There's unclassified for the Department of Defense, but not all unclassified is the same inside the context of the United States government. Then there's truly, like, sort of civilian services. In the U.S., just supporting the government, that's three different accrediting bodies.

That's GSA doing FedRAMP, that's DISA doing all the DoD certification stuff. That's the Intelligence Community doing its own certification stuff. If you're interested in this kind of stuff, I'm not gonna dwell on these, but if you're interested in this kind of stuff, Google the blog that Palantir wrote about how they solved for DoD Impact Level 6, or IL6. It's fascinating reading and really kind of reinforces the point I'm making here today. The point is, we had to figure out a way inside Cisco to separate product development from product deployment. There's not enough time and not enough money in the world, even at a big company like mine, to go build bespoke solutions to the order of five, just to meet the United States federal government requirements.

I haven't even touched private industry inside the U.S., and then you extend that globally? There's not enough time, and there's not enough money, and there's not enough talent to go do that. I've spent the last six years ranting and raving about DevOps and the notion of crashing together your development team and your operations team. It's awesome. It's culture change. It's hard to get done, but it's really, really cool, and it's really, really powerful when you can pull it off. The problem is, you can't have DevOps teams and meet those global requirements if you don't separate your development from operations. My head starts hurting, and I'm like: Wait a second. I've been trying to bring these things together, and now somehow I have to separate them again?

I'm gonna lose all the goodness of what we were trying to solve for when we brought DevOps together. We can't have that. The problem we were trying to solve was, logically, logistically, physically, to meet requirements like certain citizens with certain data living in certain places. Those are hard requirements that are given to us globally. How do we solve for those in operating these solutions, but still get the goodness of DevOps? The first thing we were trying to figure out was how, in a physical way, to keep development and operations separate. The notion was, okay, but one of the cool things about cloud and DevOps is we can build it once and deploy it many times. We're faced with this, like, oh, my gosh!

Like, we spent all this time crashing together DevOps, now we're gonna re-separate them, but we still wanna build something once and deploy it and solve all these problems. The solution is Apollo, and that's why I'm here today, because we work with Palantir. We use their platform called Apollo, and we solve those two problems that I just described to meet our customer demand. This is an interesting kind of picture. It took us forever to figure out how to try to represent the complexity of what we're dealing with in a very simple graphic. The point here is, at the top, I'm still picking on Webex, because, with our executive vice president of engineering, security, and collaboration, we're taking Webex, and we're driving it through this. We're providing Webex to meet these requirements on a global government basis.

The little picture there is this thing called a monitor. The monitor is what you build when you're building out Apollo. It drags along through Apollo. Control Hub is where we register all this stuff, and if we were in a perfect world, and everything were containerized, and it were all registered properly through Kubernetes, and we had all our Helm charts all straightened out, life would be easy. Life is not easy, right? We're all practical people. That's not the world we live in. We do some interesting things in the Control Hub. Then we get this notion of a very hard line. This is where I start to separate if I'm doing classified and unclassified work, if I'm trying to meet localized requirements in the country of Germany, for example. I can't have people back here operating those environments.

It's a very hard line between the Control Hub and then these remote hubs. Now I can meet customer requirements in the remote hubs, but I have this interesting problem, this clock that sits in the middle. I have different accrediting bodies with different sets of requirements before I'm allowed to push stuff remotely. Again, back to my United States government example, I gotta go deal with GSA, I gotta deal with DISA, and I have to deal with the Intelligence Community. All have different requirements, all have different timelines, all have different things they wanna see before I'm able to move things. Apollo gives us this capability, and the cool thing is, rather than being a company figuring this out on our own, we're partnering with a company in Palantir that's already solved these problems.

I don't have to go and roll my own and figure this stuff out the hard way. I can leverage Apollo and do this interesting thing where I can let the development teams inside of Cisco, which are not my responsibility but which I work with very closely, do what they do best. They can do their development wherever they are. From a global point of view, they can drop off features. Some of our BUs, the business units, deliver stuff on the order of, like, tens to hundreds of new software drops a day. Let me tell you, if you try to push that inside of a regulated environment, it makes people's heads explode when you say, "I wanna bring 10 software updates every single day, including weekends, across this boundary." That hard line, the accreditors are like: "No, try again.

That's not gonna happen." This is what we can do with Apollo. We've separated development from deployment of the environments, and we allow the product development teams to go do what they do best. Then we build Site Reliability Engineering teams, or SRE teams, to actually go run these environments, teams that can meet the local requirements of whatever environment it is we're going to. Citizenship, clearances, if they need them, and, you know, we can kind of meet the requirements along the way. We end up in this really cool, I wanna say magical, place. I am a huge fan of Simon Sinek. I like to claim I'm an engineer to the engineers and architects who I get to work with every day.

I have a double E degree and a computer science degree, but the people I work with every day will tell you I've been a manager and a leader for so long, it freaks them out when I touch keyboards. In the context of SRE, they actually call me Chaos Monkey because I break things in ways they didn't expect. I think that's good. They don't see it that way. Why am I here? Simon Sinek is one of my favorite authors when it comes to leadership, and Simon Sinek says, "Always start with the why," and I'm breaking that rule 'cause I'm ending with the why, but I wanted to share this with you 'cause I had this opportunity.

More than anything, I want to express gratitude for the Palantir team that I get to work with and the Palantir capabilities that we're bringing inside of Cisco. Because we are solving a really hard problem that I'm convinced is an existential threat to me as an individual and my company as a company. I was sitting right there in the back row, 'cause I'm a back row kind of guy, right in the middle of this audience at the last conference, and I took precisely one note the entire time I was here, and that note is this quote. I brought this with me. Isn't it cool when you have a prop? That note was, "Iterate with a partner you trust." To me, that's what this is about.

I'm headed into uncharted territories in a company that is as massive as the company that I work with, and I wanna work with someone who understands, not necessarily has solved all these problems, but at least understands the problems that we're faced with. We got a fighting chance to iterate through this thing and go figure out how to solve for that existential threat. My quote, the thing that brought me back here today and why I'm so happy to be working with Palantir and Apollo as a platform, is exactly this: I'm iterating with a partner I trust. More than anything, I just wanna express my gratitude to the Palantir team for everything that they've done to support us. If you have any questions about what we're doing inside of Cisco and how we're using Apollo, I'll be around all day.

I'd love to engage with people. Thank you for your time and attention.

Operator

Please welcome Chief Digital and Technology Officer at J.D. Power, Bernardo Rodriguez.

Bernardo Rodriguez
Chief Digital and Technology Officer, J.D. Power

Hello. This works? Awesome. I'm gonna give you a little bit of show and tell of the things that we're doing with AIP and LLMs. Before that, a little introduction. I think you know J.D. Power because we give awards to cars, we do benchmarking, and we talk to many millions of customers, mostly of OEMs. Then we give awards. You've probably seen the commercials, maybe on the Super Bowl: let's say the SUV of the year, or the best experience in a luxury car. That's us. But in reality, that's a very small part of our business. Most of our business is around data analytics. What we've done over the last few years is to, you know, build and aggregate data sets that, in a way, define the inner workings of the auto industry.

Which is a $1 trillion industry. This is a big, big industry. We understand what cars can be built, what cars are actually built, where they are sent, what cars are sitting in parking lots at the dealers, when they are sold, the price that they're sold at, what incentives are applied, demographics of who buys those cars, the problems the cars have when they're repaired, warranties, et cetera, et cetera. We have a complete view of the car industry. We spent, you know, many years doing things with these data sets. We do some machine learning models, very domain-specific models, traditional models. We decided about a year ago that we're gonna get into AI with more conviction. Particularly this year, we've been driving to become an AI-first company.

We're happy to partner with Palantir. I think the last slide, about iterating with a partner you trust, is very relevant to us, because we've been learning a lot over the last five, six months. We've actually been working together for about five or six months. Why now? Let's see if this works. Okay, great. What we've seen over the last few months is that even my mother asks me about ChatGPT, correct? There's a big step in innovation in AI, correct? Things are becoming, in a way, commoditized. If you go to sites like Hugging Face, there are hundreds of thousands of open source models there to be leveraged. Many LLMs, some regression models, classification models; some models have been downloaded tens of millions of times.

Now the secret sauce of AI is not in the models that you can build, but how you apply them to the data that you have, and, as you will see, how you deploy them into applications that can actually move the needle. And again, we have consolidated this data set. We have bought many companies over the last four years because we wanted those data sets to come into the fold, correct? We bought companies that have inventory data, for example. Six months ago, we bought a company that had data around EVs.

Now we have all this data, and we say, "Awesome, with all this data, and then with AI, with LLMs, what can we do?" As we look at the industry, a trillion-dollar industry, it becomes a very exciting and, you know, open field opportunity. Our job with Palantir is not to necessarily focus on efficiencies internally, but actually build solutions that move the needle in the auto industry. Those are our main clients, correct? It's a trillion-dollar industry. Any problem that we solve is a big problem to be solved. You can think about problems like when you buy a car, correct, and you try to figure out what is the car that you want, that you need, where can I find it? What is the right price? That's, for a consumer, a big problem.

Dealers have to figure out, what cars should I order? How should I price them, correct? OEMs have a lot of challenges and opportunities ahead, right? What cars should I design? Which ones were actually built? Where should I send them in the U.S.? What should be the right price, the incentives? Are my cars breaking? What are the problems in repairs? In repairs, for example, in warranty, there's about a $7 billion spend today in the auto industry just to manage warranties. Can we move the needle on a $7 billion line, for example? What we're trying to figure out, and we're trying to experiment with Palantir, is how can we use AIP? Again, when I say we're trying, we started four or five weeks ago with the launch, correct?

It's impressive how much we have done. I'll show you a little bit of where we are. What type of problems can AI help us solve, particularly LLMs? Let's start with the problem statement that I stated at the beginning, correct? I wanna buy a car, and I want an AI to help me buy a car. The first thing you can think about is something like this, correct? "I want to buy my first plug-in hybrid. I currently have a small SUV, and I think it's the right car for me and my family. What models can you recommend to me?" You can think also, I love AI. I'm going to go to ChatGPT and ask it this question, correct?

Unfortunately, if you go to ChatGPT and you ask it, "Hey, when were you last trained?" it will tell you that it was trained in September 2021. How can a machine, an AI, that was trained so long ago help me buy a car now? If you look at the industry, there are a lot of things that have happened in those 600 days since ChatGPT was trained, correct? 600 new models were launched in the industry, 1.5 million different car configurations were made, 1.2 million cars are sitting in dealer inventory each day, and about 1.1 million are sold on average, and that's actually getting higher now that we're getting away from COVID. There's about $2.4 billion a month spent in incentives.

How can an AI that was trained 600 days ago help you understand what is the right choice for you now, correct? We're trying to figure all that out. What we're doing now with LLMs, what we've done for the last few weeks, is try to determine how we can take the best that an LLM can offer and map it to the data that we have, which is real-time, high-quality data. I'm gonna show you. I'm sorry. We asked ChatGPT the question that I showed you about the plug-in hybrid, and it gave me some models that were actually 2021 models, obviously, because it was trained that way.

At the end, it said, you know, "Additionally, checking with local dealerships or visiting manufacturer websites will provide you with the most up-to-date information on the latest plug-in hybrid models available." Great, because that's the only thing it can say, correct? Can we do better with that data? I'm gonna show you a demo. Let's not start it yet. A couple of things before we start, okay? One, this is not a consumer-facing application. This is an application that we're building for us, to prepare and build that consumer-facing application. That's gonna be on our site, and it's gonna be on the sites of some of the OEMs, correct? What we're learning now is the interface, but most importantly, how do we tell the LLM how to, quote-unquote, behave? How do we put all these things together, correct?
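The pattern Rodriguez is describing, keeping the LLM's conversational ability while answering only from current J.D. Power data, is essentially retrieval plus constrained prompting. A minimal sketch of that loop, where `search_vehicles`, `build_prompt`, and the toy records are illustrative stand-ins rather than anything from the actual AIP application:

```python
# Hypothetical sketch of grounding an LLM answer in live vehicle data.
# A real system would call an inventory API and a chat-completion client;
# here a naive keyword search plays the retrieval role.

def search_vehicles(query_terms, vehicles):
    """Naive retrieval: score each vehicle by keyword overlap, top 3 first."""
    scored = []
    for v in vehicles:
        text = " ".join(str(x) for x in v.values()).lower()
        score = sum(term in text for term in query_terms)
        if score:
            scored.append((score, v))
    return [v for _, v in sorted(scored, key=lambda s: -s[0])][:3]

def build_prompt(question, hits):
    """Put only the retrieved, current records in front of the model."""
    context = "\n".join(f"- {h['model']} ({h['year']}), ${h['msrp']}" for h in hits)
    return (
        "Use ONLY the vehicle records below; do not rely on training data.\n"
        f"{context}\n\nCustomer question: {question}"
    )

# Made-up records standing in for the live J.D. Power data set.
vehicles = [
    {"model": "Plugin SUV A", "year": 2023, "msrp": 42000},
    {"model": "Compact EV B", "year": 2023, "msrp": 35000},
    {"model": "Hybrid Sedan C", "year": 2021, "msrp": 29000},
]
hits = search_vehicles(["suv", "2023"], vehicles)
print(build_prompt("I want a plug-in hybrid SUV", hits))
```

The point is the division of labor: the retrieval step supplies freshness, and the prompt forbids the model from falling back on its 2021 training snapshot.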

I'll give you some of the lessons that we have learned over the last few weeks. Second, what is important to us is to understand that the dialogue that a person has with an LLM can go in many, many different ways and can go very deep or very shallow, correct? I'm gonna show you here just a couple of questions that a user might have, but then we can talk about what's happening in the background and how that conversation can be extended to many other use cases. Let's start the demo. Cool. We start a session. We call this session Family Car. The user types, basically, the question that I talked to you about in the first slide, correct? Which is, "I wanna buy a hybrid. Can you help me out?" Correct.

This goes to an LLM. The LLM takes it and understands what the question is about, and then the LLM says, "Okay, I'm gonna look at J.D. Power's data, and I'm going to find the best answer for this question." Correct? The LLM takes that and gives that to the application and says, "Okay, these are the cars that I found in the J.D. Power data that I think you will appreciate, or that would be good for you." There's an additional question: "Hey, I like this feature that keeps me in the same lane" (this is customer language, not technical language) "and actually, I don't wanna spend that much money."

The LLM takes that, tries to understand the message, goes back to the J.D. Power data, and says, "Okay, here you go. Here are three potential models that you can take a look at. By the way, here are the trims, which are specific trims for those models." The customer can select a couple of things that they like, then we tell the LLM, "Awesome. Here are the two things that I want you to compare, LLM," and it goes and says, "Here you go." Those things that are written there, if you look at it, are basically the reasons why the LLM recommends one or the other. This text is written by the LLM, correct? You see things like, "Okay, I got you this lane keeping assist. It's great for your family. On the other side, there's a bunch of features that you might like, but it's pretty expensive.

You told me you need to watch your budget," et cetera, et cetera. We tell the LLM, "Please give me a list of the features that you think are relevant on these two cars." Correct? Let's find a car. LLM, help us find a car. The LLM goes to inventories and shows you where the cars are, and then you just select the one that you're gonna go with. This one is a bunch of miles away, but apparently it's potentially the best car you have available, and there you go. Correct, you find your car, et cetera. Again, a short conversation, but just keep in mind that the LLM is driving all this, and I'm gonna give you a quick look at what that means. I told you, this is an internal application.

As you train the LLM, you might tell it things like, for example, "Great, but when you show a car, also show the model year." Or you could say, "Also show the payment, the monthly payment," because people don't understand MSRP very well. Show the monthly payment. Okay, what's happening in the background? Very quickly, I don't know if you guys work in Foundry or not, but basically, here we have the yellow, which is the interface with the customer, where the chat happens. The green things are what makes this possible, the interaction between the LLM and all the parts of the application; the Ontology is in the middle. This is where our data is, the data that we think is relevant for this use case, and then the applications are on the other side, correct?

The beauty of having Foundry and AIP is that you can build this really quickly. Really quickly. We built this in a matter of days and iterated in a couple of weeks. The cool thing is then, obviously, we're connecting all these systems, but what about the LLM? How do we tell the LLM how to, quote-unquote, behave? You do that through prompt engineering, which AIP also allows you to build into the system. This is actually how we tell the LLM to behave, what to do. You are an AI assistant that provides guidance to someone looking to buy a vehicle. This is an actual instruction. You are friendly and helpful. This is an actual instruction. It's kind of crazy, huh? Et cetera, et cetera. That's the first one.

The second one is: use only the information on vehicles from JATO, which is, basically, the J.D. Power data, correct? Make inferences about what the customer may want based on their query. This is actually what we typed as a prompt to build the application. In the last one, we're telling ChatGPT: when somebody wants to compare vehicles, don't go crazy and go back to 2021 and find some stuff. Don't do that. Just focus on the J.D. Power data, correct? There are more things that are happening here. We are telling it, for example (the LLM doesn't know these things): if you can get a zip code, ask for a zip code, and then you can check how much snow falls in that zip code.

If there's a lot of snow, think about all-wheel drive and the clearance of the vehicle. If you have a zip code, you can know the price of electricity and the price of gas, and you can give me a total cost of ownership comparison between a plug-in hybrid and non. Miles, for example, you drive a day, et cetera, et cetera. It's very compelling. The beauty of the system, again, is that on the top, you see some of the stuff that we're doing in Palantir. We have several applications. Some of the applications are out in our clients already. We basically have data flowing all the way to an Ontology and then applications feeding that Ontology.
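The zip-code reasoning he describes (electricity versus gas prices, miles driven per day) amounts to a small total-cost-of-ownership calculation the LLM can be instructed to perform. A hedged sketch, where the `ENERGY_BY_ZIP` table and every number are made-up placeholders, not J.D. Power data:

```python
# Illustrative total-cost-of-ownership comparison of the kind the LLM can be
# prompted to run once it has a zip code. All rates are assumptions.

ENERGY_BY_ZIP = {"44101": {"kwh_usd": 0.14, "gas_usd": 3.60}}  # assumed rates

def monthly_fuel_cost(zip_code, miles_per_day, mpg=None, miles_per_kwh=None):
    """Rough monthly fuel/energy cost for a gas car or a plug-in."""
    rates = ENERGY_BY_ZIP[zip_code]
    miles = miles_per_day * 30
    if miles_per_kwh is not None:  # plug-in: pay for electricity
        return miles / miles_per_kwh * rates["kwh_usd"]
    return miles / mpg * rates["gas_usd"]  # gas vehicle

ev = monthly_fuel_cost("44101", miles_per_day=40, miles_per_kwh=3.5)
gas = monthly_fuel_cost("44101", miles_per_day=40, mpg=30)
print(f"plug-in: ${ev:.2f}/mo vs. gas: ${gas:.2f}/mo")
```

The LLM's job is to gather the inputs conversationally (zip code, daily miles); the comparison itself is ordinary arithmetic over live local data.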

One of those applications, for example, is an application around repair analytics, to understand warranty costs and optimize warranty costs. What we're doing now is an application using LLMs that will take verbatims from repairs. Every time a car is repaired, the technician at the dealer writes, "This is what the customer told me, and this is what I found and how I repaired it." You have hundreds of thousands of those, millions of those. If you're a systems engineer trying to focus on how to minimize warranty, you want to understand root cause. Why is this failing? At what speeds? At what temperature? What are the behaviors? We built an application, powered by an LLM, that will allow us to do this. This is really real.
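The verbatim-mining application he describes can be pictured as a small extraction-plus-aggregation pipeline. In the real application, the extraction step would be an LLM prompt; the sketch below substitutes a trivial keyword match over an assumed component taxonomy so the example runs self-contained:

```python
# Sketch of turning free-text repair verbatims into structured signals for
# root-cause analysis. The verbatims and component list are invented.

from collections import Counter

VERBATIMS = [
    "Customer states rattle at 60 mph; found loose heat shield, resecured.",
    "Customer reports rattle on highway; heat shield bracket cracked, replaced.",
    "No start in cold weather; battery failed load test, replaced battery.",
]

def extract(verbatim):
    """Pull a coarse component mention out of one verbatim."""
    for component in ("heat shield", "battery", "brake"):
        if component in verbatim.lower():
            return component
    return "unknown"

counts = Counter(extract(v) for v in VERBATIMS)
print(counts.most_common())  # most frequently implicated components first
```

Swapping the keyword match for an LLM extraction prompt is what makes this tractable at the scale of millions of free-text records.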

I mean, we'll present this to a client in two weeks. If we get it right, once we're convinced that it's right, we'll deploy it as an application to our clients. A few things are really critical for us. Like the slide that I previously talked about, one is speed. We need to be able to do this fast. We need to be able to not only take things to market pretty fast, but learn really fast. The iteration cycles here are not measured in months or weeks or days. This is about every hour; we're learning things about how we train LLMs, how we understand how they behave, how to leverage their insights, et cetera.

Extensibility: we need to connect every single application, every single model that we have in our platform, to an LLM-driven experience. The last two are completely fundamental for us. Our data is our IP; it's the value of our company. The data that our customers put in our systems is absolutely critical. This has to be bulletproof. It has to be very secure, and privacy and governance are really a must for us. Finally, the road ahead. The first one, we have done this for a few years: we bring data, analytics, and we do domain-specific models. Now we're leveraging LLMs, integrating them with our data, and also bringing in all the models to provide new experiences. We've been working on that, and we're actually releasing applications in a couple of weeks.

Ultimately, what we think in the challenge is, how do we build intelligence that actually understands the auto industry and can tackle fundamental questions? That is combining LLMs with domain-specific models, and we are even arguing, should we train our own LLM to be able to be more nuanced about the discussions that we have in the auto industry? Running out of time, I want to thank the Palantir team, great partners, you know, Taylor, Suzanne, Jan, all those guys who've done amazing work. A lot of respect and appreciation for the work and looking forward to the next few months. Thanks a lot.

Operator

Please welcome Chief Digital Officer at Cleveland Clinic, Rohit Chandra.

Rohit Chandra
Chief Digital Officer, Cleveland Clinic

That was pretty cool. We know where we'll start shopping for our next car. My name is Rohit Chandra. I'm the Chief Digital Officer at the Cleveland Clinic. I'm gonna tell you a little bit about the work we've been doing with Palantir for the last couple of years. A little bit of background on the Cleveland Clinic. We're a large healthcare system, and our work is organized around four key stakeholders. First and foremost, our patients. Then there are our caregivers, the organization, and the communities in which we operate. From a technology perspective, the approach that we focus on is to make sure that we're working backwards from providing value to one or more of these key stakeholders.

In addition to striving to provide the best healthcare possible, we have a strategic imperative to touch as many lives as possible and take care of as many patients as possible. That's what drives the work that we're trying to do with Palantir, which is: how do we improve hospital operations, throughput, and efficiency, and serve as many patients as possible? Talking a little bit about... I'm sorry, can you go back one slide? Yeah, thank you. Just to give a little bit of flavor for what it takes to operate a hospital, many of you in the audience who are in healthcare will be familiar with this, but just as some backdrop: at the Cleveland Clinic, we operate more than 20 hospitals and more than 200 outpatient locations.

We have about 300 operating rooms and more than 6,000 beds. We employ more than 6,000 physicians and more than 15,000 nurses. We need to coordinate the work of all of these caregivers for a daily activity volume of more than 25,000 outpatient visits, more than 1,000 surgeries, and more than 30,000 pharmacy orders. Orchestrating all of this activity so that it functions like clockwork is not easy. All of the work in terms of scheduling physicians, nurses, supplies in an operating room, making sure you have an ICU bed, a step-down bed: all of these things are complex to orchestrate. Like many organizations, we're a 100-year-old organization. We have grown organically, we have grown through mergers, and a lot of this orchestration is done mostly manually, with a light assist from technology.

What we're striving to do here is: How can we lean into automation so that all of this coordination is mostly automated and assisted as much as possible by technology? When we looked at the products in the marketplace, we felt that they didn't really go deep enough in terms of tackling these problems. We actually went out, and we chose to work with Palantir, because we were looking for a technology partner, not somebody who was a domain expert willing to sell us a product. What has been unique about Palantir over the last couple of years is that Palantir is not just willing but actually thrives on solving the hardest problems, as opposed to selling us an existing solution.

This translates into not just willingness, but eagerness, to sit side by side with us, spend the time understanding healthcare, understanding our needs, understanding our process, understanding our culture, and then developing solutions from the ground up, literally from first principles. How did we approach this? There's a crawl, walk, run approach, and let me talk about the first step. As in any complex enterprise that has existed for a while, most of the operational information for the enterprise is littered across a variety of archaic systems. Just pulling that information into a single coherent view, so that you can actually reason about the product and business concepts, is not to be underestimated. That's the first step that we've taken with Palantir, to say: Can we bring the information together?

Can we construct the views that at least give us intelligent visibility into what is going on, so that we can actually take actions, make decisions, and drive the enterprise forward? I'll touch a little bit later on the things that we're trying to do to go from the crawl to the walk, to the run stage. I'll touch on two areas. The first is capacity management, and this is a screenshot of the Hospital 360 module within Palantir, which gives us a view into bed occupancy across the enterprise. I know that the screenshot may be a little bit hard to read, but we can now see in a single place all of the beds, the occupancy. We can double-click, and we can see the beds by unit, by facility.

On the left-hand side, we can see all of the inflow of patients, and patients can come in from a variety of different sources. We can have patients that are undergoing surgeries and will come in from an operating room. We have patients that we will admit from an emergency department, patients that are direct admits, and then there are hospital transfers, where patients may need care that may only be available at one of our facilities. On the right-hand side, you can actually see the outflow overview, which is all of the discharges that are planned from the hospital, so that you can actually have a view into bed availability across the enterprise. Previously, this information was really hard to get at, so when you were faced with trying to see: Do you have the room to admit a patient?

Do you have the room to accept a hospital transfer? It just wasn't possible. The other thing that we're leveraging Palantir for in this view is that it's not only giving you a current snapshot, it's also forecasting, or predicting, which discharges are gonna happen later in the day. Instead of discovering at 8:00 P.M. that you were able to discharge a patient and you actually have a bed, you can predict which patients are likely to be discharged earlier in the day, and that allows you to make a decision. By 8:00 P.M., it may be too late to accept a new patient. With a view into bed availability that includes forecasting and prediction, you can know which patients you can accommodate, make that decision, and actually serve more patients.
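The arithmetic behind that forward-looking view is simple once the discharge predictions exist; a toy version, with every number invented for illustration:

```python
# Toy version of the bed-availability math: the current snapshot plus
# predicted discharges gives a projected count early enough in the day
# to commit to accepting a transfer.

def projected_open_beds(total_beds, occupied, predicted_discharges, expected_admits):
    """Beds expected to be free later today, given the model's predictions."""
    return total_beds - occupied + predicted_discharges - expected_admits

open_later = projected_open_beds(total_beds=120, occupied=118,
                                 predicted_discharges=9, expected_admits=4)
print(open_later)  # transfers we can still say yes to this morning
```

The hard part, of course, is the discharge-prediction model feeding `predicted_discharges`; the value of the view is that the arithmetic runs in the morning instead of at 8:00 P.M.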

What is the benefit that we get from this? We were able to see about 8.5% more patients in terms of hospital transfers. That's roughly about nine to 10 new patients per week. This module is live at only one of our campuses. It's highly encouraging, being able to see nine to 10 more patients per week. The second part is we see a 75% reduction in time spent calculating bed capacity. As I showed you on the previous screen, you can actually see all of the information at a level that's actionable, and in a single view, as opposed to having to click through 5 different systems and having to decipher the data.

There's a 33% increase in patient acceptance rate. That goes to our core mission to see as many patients as possible. The second module that I want to touch on is staffing. As many of you may have experienced or may be aware of, there's a significant nursing shortage. Being able to balance our load across our nursing staff is super important for us. This module is giving you a view. Again, I'll try and explain the visuals. I realize the screenshot may be a little bit small. It's actually trying to do two things in a single view. The first is, it can show you the occupancy literally by every unit in the hospital. What is the bed occupancy? What is the nursing requirement? What is the nurse assistant requirement? What is the PA requirement, literally by unit across the entire hospital?

The second thing it will show you is what is the current allocation of staff. In a single view, you can actually see where you may be overstaffed, where you may be understaffed, and what are the adjustments that you can make to actually serve all of the patients and all of the demand across the entire facility. The second thing that we can do is we can forecast both the demand as well as staff availability over the next two weeks, so that you can actually develop schedules that can be seven, 14, maybe as much as 30-day projections to give people predictability in their scheduling over the next weeks.

This visibility allows us to make these adjustments. In the past, all of this coordination required multiple nursing managers spending hours a day looking at spreadsheets or calling people, coordinating through phone calls from one unit to another to figure out, "Hey, I'm short. Is there somebody? Do you have spare capacity?" All of that was done through phone calls and spreadsheets, without a single coherent view of what was happening across the enterprise. The nice thing is that the module is live. People can actually have a single view of what is going on, and the impact for us is significant: we now have 400+ users who are using this module literally on a daily, weekly basis.

Previously, nursing managers were spending 4 to 5 hours a day just managing nurse assignment, now they're able to do it in 45 minutes. Instead of discovering that you need to adjust a nurse from one unit to another a few hours into the shift, you can actually do that ahead of time, so that you give people a little bit greater predictability, and you're able to manage your staffing needs across the enterprise. Stepping back, I think we've now activated these modules over the last six months across different portions of our enterprise, it has been amazing. Many of those screenshots illustrate how we've been able to drive visibility and increasing automation across how we manage all of these sort of day-to-day operational tasks.

It's important to keep in mind, though, that technology is an excellent enabler, but carrying the people and the process along is an equally important pillar in driving these changes in a complex business. I'm sure all of you see that in your experiences, but for us, it's been very important to do this in a way that you actually are able to sit side by side with Palantir and design the tools, but at the same time, co-design them with the people and the process so that they go hand in hand to achieve success. It's not just that you can enable technology and it'll magically just work. We're highly encouraged by the benefits of automation.

Being able to move the conversation from just the mechanics and a litany of data in different systems to a coherent view where everybody's operating off of the same base has been excellent. The walk and the run stage that we're pushing on now is how do we layer in AI and predictive technologies so that we can actually forecast what is happening, so that we can plan not just in the moment, but forecast into the future? The second thing is we can actually make recommendations. I showed you a screenshot where you can have visibility on where we may be over and under staffed, but the next thing we want to do is actually make recommendations so that you can actually simply accept those recommendations for the most part, maybe after a little bit of a manual review.

As we build the confidence, then we can automate some of these recommendations and run a closed-loop system. All of this journey is driving towards increasing tools, increasing automation, maximizing throughput and efficiency, with the eventual goal of serving as many patients as possible. Thank you so much.

Sasha Spivak
Head of Corporate Development, Palantir

Another round of applause for our first half of speakers. All right, we're gonna give everybody who's here with us live, who's here with us on the live stream, about 20 minutes to get a coffee, to get some food, to connect with each other, and we'll see you right back here in 20.

All right, we've got some incredible speakers left to follow, some incredible demos. Hope you're energized, hope you're caffeinated, and hope you're ready. Allow me to please introduce Justin Herman, CIO of Panasonic Energy North America.

Justin Herman
VP and CIO, Panasonic Energy of North America

Hi, everyone. My name is Justin Herman, CIO for Panasonic Energy of North America. I'm extremely excited to be here today. You know, during the break, I was chatting with a few of the keynote speakers, and there's a common thread that's coming out here: the partnership with Palantir, and how we iterate with Palantir to create business value. Today, I want to take a little bit of time just to share with you the Panasonic Energy of North America, or PENA, story, and how we've partnered with Palantir to create value within the enterprise itself. Behind me is one of my favorite quotes from one of my favorite leaders. I won't tell you who that is. I'll let you search that and see if you can guess. What do we mean by the impossible?

What I'm holding in my hand here, and I'm sure many of you know what this is, is the highest quality, safest, most cost-effective lithium-ion battery. A show of hands, real quick: who drives an EV in this audience? Quite a few put their hands up. You're welcome, by the way. There are between 3,000 and 10,000 of these lithium-ion batteries in every single EV on the road today, in an industry that has grown exponentially over the past decade. In 2011, there were approximately 22,000 EVs on the road, and it's projected that in 2025, there'll be over 23 million on the road. Demand is only growing, and we need to be able to meet that demand. At PENA, we started our operations in 2016. We shipped our very first cell in 2017.

We reached 100 million cells shipped in February of 2018, 1 billion one year later, and to date, we have shipped over 7 billion cells. This type of growth is not only nice to have but an absolute necessity. How is this made possible? Well, it was made possible by the vision of Panasonic and its leaders, as well as our journey towards achieving Industry 4.0. Industry 4.0, as you all know, is a pretty loose term, right? We all have different meanings for exactly what that means. Let me explain what that means for us at PENA. At PENA, when we talk about Industry 4.0, we really talk about integrating multimodal data into our manufacturing process. We look at upgrading legacy systems and manual processes and streamlining them into more modernized versions.

Discovering inefficiencies, but not just discovering the inefficiencies, acting upon those inefficiencies, and then leveraging the foundation of incremental value and agility to rapidly scale our capabilities through technology. Through cutting-edge use cases, we build a framework of robust edge integration that enables us to unlock key capabilities for our business. What does this all lead to, right? What is the goal? The goal is to create a connected data infrastructure which drives value, where we can connect disparate data systems extremely fast and pull meaningful value from these various systems. This leads, I should say, to quality improvements, safety improvements, and it also frees up the time of our resources to focus on more value-added activities. Of course, there's always a multimillion-dollar ROI associated with these types of activities. Where did it all begin?

Well, we like to reference Panasonic Energy as a 100-year-old company that really acts like a startup, right? Within our culture, within Panasonic's culture, we really take innovation and marching into the unknown as a core value, and it's ingrained into our culture. By partnering with Palantir, we will further enable that innovation. Right over here is a quote that our president, Allan Swan, likes to mention quite a few times. As you can guess, that group of 4,000 people is really referencing PENA, and it's pushing into the unknown and building towards and meeting an ever-increasing need. I'd like to take a second and dig into what it really takes to achieve Industry 4.0. At PENA, we identified two challenges.

The first one being business value versus speed, and the second, the smart factory versus traditional infrastructure. For us, developing use cases was absolutely critical to demonstrating value to our key stakeholders. Investing in impactful use cases can be quite challenging and bandwidth-intensive when you try to achieve this in advance of buy-in. In addition, when you look at the factories, the various machines, the variety of timelines on which they were implemented, and the data sources they're producing, this introduces significant complexity into modernizing. Any amount of downtime for our operations can be extremely impactful. Of course, delivering value at speed keeps you current with the industry, but for us, it was really about demonstrating value to our key stakeholders. This was extremely important for us.

I recall, one or two FounderCons ago, I can't remember the exact question that was asked of Dr. Karp, but his response to it was: "Take your timeline and cut it in half." This really resonated with us at PENA because that is exactly how we think. With Palantir, we were able to do this. Now you might ask yourself the question: How were you able to do this? I believe the key differentiating factor for us was the Ontology itself. Behind me is kind of a little eye chart, but this is the back end of our Ontology model at PENA, and it's taking all the disparate systems and integrating the disparate data into a reusable, highly valuable data asset. To us, the Ontology is more than a tool, right?

It's really mapping our processes to create real enterprise value. Another eye chart for you. Once you have the Ontology, right, and all your business processes can be contextualized within that Ontology, you create what we call the manufacturing Ontology. Once you have that, we can then connect through this any external systems to further compound the value, leveraging Ontology SDK. I'd like to talk to you about connected operations and one of our use cases. This use case, in particular, was in our electrode control tower, and it was delivered within one month. Actually, the first working model of this was delivered within one week, right?

It involved a highly manual process where our engineer would literally take a thumb drive, plug it into the machine, wait for the data, take that same thumb drive, visually inspect the machine, download the data on the machine, and then run it through their models. This entire process took about 4 hours. Within one month, we were able to give them the tools that took that 4-hour process and now condensed it down to 15 minutes with significantly improved data integrity. Five months later, what we're looking at from a speed to value perspective is not only that one use case, we have six additional use cases, and these six additional use cases run over predictive maintenance, material traceability, really connecting our operations together and delivering contextualized data within the organization, which is driving value to our stakeholders.
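For a sense of what replacing that collect-and-carry loop looks like in code, here is a minimal Python sketch of a streamed pipeline: machine readings arrive directly and are run through the analysis model in one call. The machine names, the accepted band, and the `run_quality_model` stand-in are hypothetical illustrations, not PENA's actual models.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    machine_id: str
    value: float

def run_quality_model(readings):
    """Toy stand-in for the engineers' model: flag readings outside
    an accepted band (band chosen arbitrarily for the example)."""
    return [r for r in readings if not (0.9 <= r.value <= 1.1)]

def automated_pipeline(stream):
    """With machines streaming into the platform, the 4-hour
    thumb-drive loop collapses into a single function call."""
    flagged = run_quality_model(stream)
    return {"total": len(stream), "flagged": [r.machine_id for r in flagged]}

readings = [Reading("coater-1", 1.02), Reading("coater-2", 1.25), Reading("mixer-1", 0.95)]
report = automated_pipeline(readings)
```

The point of the sketch is the shape of the change: once ingestion is continuous, the human step shrinks from data collection to reviewing flagged results.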

We have significantly more use cases in the pipeline, so much so that our team is actually putting in some guardrails to make sure we can meet that demand. To wrap that up, these quotes you see on the board are actually from our end users, right? Because at the end of the day, you want your end users to be happy with the solutions and the data and analytics tools that you're providing to them. The reason why we have such excitement within the organization right now has to do with our partnership with Palantir.

The teams came together. The most significant thing that I saw throughout this engagement so far has been how Palantir came in and understood our business processes, sat with the operators on the lines, understood their daily challenges, and then took that and leveraged the technology to solve those problems. This helped PENA find value that we ourselves did not even know was there. In closing, I would like to thank the Palantir team for everything they have done for us. Our partnership is strong. We're in our infancy. I am extremely excited about what we will achieve in the future and how we will achieve the impossible. Thank you for your time. I hope you enjoy the rest of the keynotes.

Operator

Please welcome Chief Executive Officer at Jacobs, Bob Pragada, and EVP and President, Divergent Solutions at Jacobs, Shannon Miller.

Bob Pragada
CEO, Jacobs

Well, thank you, everyone. Good day. It is a great privilege and honor to be here with everybody at Palantir. On behalf of the entire Jacobs team, I want to thank Palantir for the invite and the opportunity to speak to all of you. Before diving in, I did want to share with the group a bit of a personal story. This town, Palo Alto, has a very unique and special meaning to me. Today, as we were walking from the Sheraton with my colleagues, down University Boulevard, it was a pretty reflective moment for me, and it was reflective from this context. There are three events that happened in this town that have forever changed my own life.

The first was, I did have the privilege and honor to maybe go to school down the street for grad school several years ago, but that definitely did set the course for things that I had the opportunity to do later in life. Secondly, I started off my career in the service. When I finished grad school, went back to the service, my first job out of the Navy was right here, back in the Bay Area, very close to Palo Alto. That was unique, but that second one really wasn't the piece that was life-changing. My younger son was actually born about 3 km away from here at El Camino Hospital, while I was here.

The third, which actually just happened recently, is I had the good fortune to send my older son to school here at Stanford. My quick plug: Go Card, go Stanford. It's great to be here in Palo Alto, and I love to share some of those experiences as well. Maybe by a show of hands, 'cause I know we have some clients in the room, as well as some partners: who here knows who Jacobs is? Okay, that actually was more than I suspected. Interestingly enough, you know, Jacobs is one of the world's largest engineering and technical services companies around, U.S.-based.

Before getting into some of the mechanics, I think it's important to set context on where we came from. We started off with deep engineering roots in the chemical process industry, and over decades, diversified into other markets. What Dr. Jacobs started was: we're in the business of making our clients' business a better business, hard stop. That's producing outcomes and providing solutions for some of the world's most difficult issues. Today, the end markets that we're in: critical infrastructure, think water, transportation, environment, energy, everything going on around energy transition; advanced facilities, deep domain expertise in life sciences manufacturing as well as semiconductor manufacturing; and then national security. We've been in the middle of some of the biggest missions around the world, supporting the U.S., U.K., and Australian governments.

I won't bore you with all the statistics. Needless to say, we're a big company, and we do a lot. We do a lot in the world. What's probably more interesting, and I love the Simon Sinek reference on starting with the why: why do we exist as a company? Who are we? It really is that we are in the middle of some of the biggest megatrends that are affecting the world. I missed the one there, climate response. I heard a statistic the other day; it is kind of dark, so bear with me on the darkness, and then we'll bring it to light.

If you think about the probability of some kind of nuclear holocaust, there is a probability there; I'm not even gonna mention it. If you compare that to the probability if we do nothing as a society on solving climate change, or at least slowing climate change with everything we're doing around climate response, you know, that's a 100% probability that we won't exist. That's my dark comment. Let's get back to light. What we're doing is, we're in the middle, from that science-based domain expertise that we built over decades, of addressing each one of these issues. When you go around the matrix, you see it coming down to: what are all of these issues surrounded by in the physical world? Data. All about data.

Our partnership with Palantir has really accelerated that effort on taking deep domain science-based experience, coupling that with strong data science and data platforms to really reinvent the way we're solving these issues for our clients. Some common themes there, I think some of my predecessor speakers spoke to them. Speed, that goes without saying. You know, we at Jacobs, we can customize for a single client in a single market, in a single geography, but being able to do that at scale across multiple sites around the world, that's what the Palantir partnership brings to us.

Then specialization, you know, we have deep domain expertise in the science, and with Palantir, deep domain expertise in the data science, whether it be any of the platforms we're talking about today, that combination is powerful, and it leads to this. Our vision for the world is to take that deep domain science expertise, couple that with data science and data platforms, and make a positive impact in the world. We do have a use case. My colleague, Shannon Miller, is not gonna join me on the stage. I'm gonna leave the stage and give the stage to her, but look forward to interfacing and interacting with everyone moving forward. Thanks, everyone.

Shannon Miller
EVP and President of Divergent Solutions, Jacobs

All right. Thank you, Bob. Good day, everybody. As Bob mentioned, I run our digital, data, cyber, and cloud solutions business at Jacobs, where we're really focused on solving some of the most complex problems in critical infrastructure. I've been really honored and excited to develop this partnership, along with our team, with Palantir, to really solve the most complex problems for critical infrastructure and the solutions that are around that. As Bob mentioned, our goal at Jacobs is to create a safer, more resilient, and connected world. I can't think of a use case more exciting than focusing on how we improve the water cycle for the entire planet. If you think about it, this is critical for a lot of reasons.

Not only is climate change stretching our need to expand our ability to leverage our resources; there's the need to reduce our carbon footprint, as Bob said, not soon but very fast, and then, of course, the staffing shortages and the waves of retirements of some of our most experienced and excellent operators across the entire utility community. Although this might not be the most glamorous use case, our friend Bernardo at J.D. Power got me really excited about buying a new car. I probably don't need to buy a new car. I'm gonna get you excited about how we're harnessing the power of artificial intelligence to improve water treatment. We're gonna see how AI acts as a copilot.

Sean did a great job showing how we set this up, from not only using it in real time to optimize decision-making while we're operating our utilities, but also when the stakes get higher in the wake of a massive storm, and then, of course, for long-term planning and how we deploy our assets going forward. This allows our operators to interface with troves of complex data using natural language, as we've seen today. I'm really excited to show three use cases of what we're working on together with Palantir and what we've got being deployed to our customers very soon. All right. First up, we're going to talk about how we're operationalizing our data.

This is Jacobs' Aqua DNA platform, where we combine our deep domain expertise in wastewater treatment with Palantir, the Foundry platform, and AIP. First up, if you take a look at this, we're looking at the entire city. This is an entire wastewater treatment system on our Aqua DNA platform, powered by AIP. If you think about it, it encompasses millions of people. It's the city's core infrastructure: the drainage, the sewage, the treatment plants, and everything that happens in this complex system. It's the valves, the sensors, and the catchment basins. At any moment in time, our operators are making thousands of decisions to optimize what they're doing. First of all, if you send too much water to a wastewater treatment plant, you could flood it, causing millions of dollars in equipment damage.

That would take down the city's ability to treat wastewater. If you don't manage your storage appropriately ahead of a storm, you could run out of critical capacity, causing an overflow and leaking polluted water into the local environment. To guide these decisions, systems are typically outfitted with numerous sensors, and what we've heard from our utility operators is that in the last two years, they've captured more data than in the last 20 years combined. Think about how difficult that is. The flow of information, no pun intended, becomes very difficult for them to leverage to optimize their systems. On top of this, with all these assets being added to the networks, it introduces new vulnerabilities for cyberattacks. All right.

In the face of all these challenges, let's see how Aqua DNA helps me monitor our day-to-day operations with the data that we already have. This is just day-to-day operations in a utility treatment network. First of all, everything is centralized in Foundry. We've been able to connect all of our data from design, operations, and security, and bring it all together within the Ontology. This means we can compound our returns and have fewer vulnerabilities. We're able to rapidly ingest and process all the disparate sets of data which were previously disconnected, creating confusion and often unproductive noise in the system. Again, it's all being processed in real time into standardized, reusable models, our Ontology.

AIP is enabling me to proactively monitor for the changing conditions in the system that might require human attention, such as maybe a cyber breach or even a clog. We were talking last night; surprisingly, one of the things that we find often in clogged wastewater treatment systems is blue jeans. We were hoping to maybe recycle some jeans for y'all here today, but we didn't do that. Anyway, we'll see here that there's an alert that there's a potential clog. Let's assume it's some Levi's jeans in one of our drains. I'm gonna zoom in on this clog and see what action I need to take. Again, we've got real-time, digestible data of what's happening and where this clog fits into our system.
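As an illustration of the kind of rule that could surface a clog alert like this, here is a toy Python sketch: a clog in a segment shows up as downstream flow falling well below upstream flow. The loss threshold and sensor values are invented for the example, not Jacobs' actual detection logic.

```python
def detect_clog(upstream, downstream, loss_threshold=0.3):
    """Flag a potential clog in any segment where the downstream flow
    falls more than loss_threshold below the upstream flow (water is
    backing up rather than passing through)."""
    alerts = []
    for seg, (up, down) in enumerate(zip(upstream, downstream)):
        if up > 0 and (up - down) / up > loss_threshold:
            alerts.append({"segment": seg,
                           "loss_fraction": round((up - down) / up, 2)})
    return alerts

# Hypothetical paired flow readings (e.g. liters/sec) per pipe segment.
up_flows   = [100.0, 80.0, 120.0]
down_flows = [ 95.0, 40.0, 118.0]
alerts = detect_clog(up_flows, down_flows)
```

A real deployment would compare against seasonal baselines rather than a fixed threshold, but the alert-on-anomaly shape is the same.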

It is gonna require maintenance, so I obviously want to understand how urgently I need to dispatch a crew. I also know that a storm is forecasted for this weekend, and I want to make sure that I'm taking that into account in how I make prioritized decisions. With that in mind, I use the AIP assistant to help pull in that forecast for the upcoming storm, and it simulates it across the entire system. AIP is going to pull in the weather forecast data. It's going to bring it in as a new layer and estimate how much wastewater is going to flow into the system. The AI has been able to call on our specialized Jacobs design simulation models, such as our Flood Modeller platform, to really understand the impacts of this storm.

I can see now that we're getting two new alerts with this simulation, with one showing that this clog is going to result in an overflow. Clearly, this warrants a high-priority fix. I'm gonna go ahead and assign maintenance in the platform, and then that automatically updates their schedule and redirects the crew. This conversational, real-time scenario modeling is a huge step forward from the complex manual work that would have been required to perform this analysis in the past and get that maintenance crew to address the clog before the storm comes. This is really due to our ability to query massive amounts of data in real time and map out different scenarios so we can make the critical decisions in a timely manner. Next, we're gonna amp up the stakes.
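The prioritization step described here, where a simulated storm load turns a routine alert into a high-priority fix, can be sketched roughly as follows. The alert fields, loads, and capacities are hypothetical placeholders, not the platform's actual data model.

```python
def prioritize(alerts, storm_inflow):
    """Rank maintenance alerts: any alert whose segment would overflow
    once the simulated storm inflow is added jumps to 'high' priority
    and sorts to the front of the dispatch list."""
    ranked = []
    for a in alerts:
        overflow = a["projected_load"] + storm_inflow > a["capacity"]
        ranked.append({**a, "priority": "high" if overflow else "routine"})
    # High-priority alerts first (False sorts before True).
    return sorted(ranked, key=lambda a: a["priority"] != "high")

# Invented example: the clogged segment overflows only under storm load.
alerts = [
    {"id": "clog-17",  "projected_load": 70.0, "capacity": 90.0},
    {"id": "sensor-3", "projected_load": 20.0, "capacity": 90.0},
]
plan = prioritize(alerts, storm_inflow=30.0)
```

The same alert that looks routine on a dry day becomes urgent once the forecast layer is added, which is exactly the point of simulating before dispatching.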

We've got a storm coming, and we're gonna see how we can leverage our Aqua DNA platform to manage the forecasted storm. Here I'm looking again across the entire system, but now I'm seeing how this storm is going to impact us in the future. The platform tracks where wastewater is entering the system and continually updates our forward projections, feeding our risk models and giving us alerts. Only now, in addition to those alerts, we have a few types of adjustments that are going to take place automatically without me needing to dispatch any crew. These are being executed by our AI Agents that we've been talking about today in some of our other stories. They're trained in our system's historical data, and they help adaptively keep things in check throughout the entire storm.

They're combining machine learning with our expertise and our Flood Modeller platform to really define optimal system performance. It's important to know I'm still in full control. I can monitor all the actions that the Agents are taking, their effects, and adjust, override, or disable their behaviors. They're taking care of hundreds of optimal decision-making adjustments for me, which could include things like adjusting valves, diverting wastewater, and updating some of the treatment parameters. With that being said, some actions are clearly too consequential to be taken automatically, right? Here we can see that AIP has surfaced an alert. It's showing me that there's a risk of an overflow in the next 2 hours, and it's going to make some recommended diversions to avoid that overflow. However, it is showing that it requires my input to take action.

I'm going to dig into those recommendations to understand what's driving it and the potential trade-offs associated with it. It helps me see the path leading to the overflow risk, as well as its recommended diversion of the wastewater from a northern to a southern catchment basin. I can see the projections and the predicted metrics, helping me understand the full picture of what's likely to happen if I make the diversion or if I don't. There's a trade-off here, as you can see. On one hand, there's a 24% risk of overflow if I take no action, and on the other hand, the proposed diversion routes wastewater to a less effective or efficient wastewater treatment plant. I'm increasing the total treatment cost that's above a threshold parameter that I have in the system.
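The threshold guardrail being described, where an action whose cost exceeds a preset parameter is held for human approval instead of auto-executing, follows a common human-in-the-loop pattern. Here is a minimal sketch; the action names and dollar figures are invented for illustration.

```python
def evaluate_action(action, cost_threshold):
    """Guardrail check: actions whose added treatment cost stays under
    the operator-set threshold execute automatically; anything above it
    is held and surfaced for human approval."""
    if action["added_cost"] <= cost_threshold:
        return {**action, "status": "auto-executed"}
    return {**action, "status": "awaiting-approval"}

def approve(decision, operator):
    """Operator sign-off flips a held action to executed."""
    return {**decision, "status": "executed", "approved_by": operator}

# Hypothetical diversion whose cost exceeds the preset threshold.
diversion = {"name": "divert-north-to-south", "added_cost": 12_000.0}
decision = evaluate_action(diversion, cost_threshold=5_000.0)
final = approve(decision, operator="shift-lead")
```

The threshold is the knob that decides where automation stops and the human begins; setting it is a policy decision made ahead of time, not at the moment of crisis.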

This is why the AI is unable to facilitate this action automatically. It needs my input and my approval to progress. I might also want to dig into what's behind these calculations; I can take a look at what assumptions, data models, calculations, and simulations were behind making that recommendation. I can do that right here as well. Based on this information, I do decide that the best path is to go ahead and, in fact, approve this diversion. I approve the diversion in the system, downstream actions flow automatically, and approval processes and communications kick off as well. I can monitor them and see how it plays out to ensure it plays out as I expected. In a situation like this, the automatic capture of all of our decisions and reasoning is essential.

We need the platform to keep a full history of all of our actions, whether they were taken by a human or by the AI, along with the associated context and the state of the world, and we can do that all within AIP. This enables me to show regulators why I took decisions, what the circumstances were, and what alternatives were presented when I made those decisions. I can also use this to inform future operations and planning, as well as to train the AI for future events. The great thing is, all of this is controllable. AIP allows me to be in the loop on all of our AI automations, the data they're using, the patterns of reasoning that they're relying on, and the guardrails that are set around them, and it makes sure that it's performing accurately. It also enforces these guardrails.
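A decision history of the kind described, capturing who acted (human or AI), the context, and the alternatives available at the time, might look something like this minimal sketch. The entry fields are assumptions for illustration, not AIP's actual schema.

```python
import datetime
import json

class DecisionLog:
    """Append-only record of each action, with the actor, the state of
    the world at the time, and the alternatives that were on the table,
    so decisions can later be explained to regulators or replayed."""
    def __init__(self):
        self.entries = []

    def record(self, actor, action, context, alternatives):
        self.entries.append({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "actor": actor,
            "action": action,
            "context": context,
            "alternatives": alternatives,
        })

    def report(self):
        """Render the full history as a report-ready JSON document."""
        return json.dumps(self.entries, indent=2)

log = DecisionLog()
log.record("ai-agent", "adjust-valve-12", {"flow": 85.0}, ["no-action"])
log.record("operator", "approve-diversion", {"overflow_risk": 0.24}, ["reject"])
```

The append-only shape matters: the record of what the world looked like when a decision was made must not be editable after the fact.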

The level of action and the criticality at which a human needs to be in the loop is decided ahead of time. Then we can also dig in further to those actions, how it was all generated, and of course, create a report at any time. All right, in the last scenario, we're going to look at how we leverage AIP for long-term future planning. In the aftermath of the storm, it sort of, you know, ticked off a few things for me. I want to think about long-term planning of my wastewater treatment facilities. I want to reexamine my long-term plans. I want to use not only historical information but also forward-looking data, right? To really create a complex scenario for my simulation. Let's go ahead.

We're going to build a case study that looks 30 years into the future and see how that would play out. All right. We're first going to start with our current world and layer in things like population growth, urban development, and of course, climate change. We're first going to ask the AIP assistant to pull in plans for development and the forecasted population growth. What we're going to do is take the top 10% of storms last year and then add 25% to that, which is what's been recommended by the AI, and that's going to build our case study for future planning. Now I want to test how our current infrastructure would fare in this scenario. As the simulation runs, it shows me that there are several overflow risks.
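The case-study construction just described, the top 10% of last year's storms plus 25% on top, is simple enough to sketch directly. The storm intensities below are made-up sample values, and the exact percentile convention is an assumption.

```python
def design_storm(intensities, top_fraction=0.10, uplift=0.25):
    """Build the planning scenario: take the most intense top_fraction
    of last year's storms and scale each up by the uplift factor."""
    ranked = sorted(intensities, reverse=True)
    k = max(1, round(len(ranked) * top_fraction))
    worst = ranked[:k]
    return [round(x * (1 + uplift), 2) for x in worst]

# Hypothetical peak storm intensities (e.g. mm/hr) from last year.
storms = [12, 18, 25, 40, 33, 8, 55, 21, 30, 16]
scenario = design_storm(storms)
```

Stress-testing the network against this amplified scenario, rather than the historical record alone, is what lets the 30-year plan account for a worsening climate.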

I want to look into the infrastructure-level changes to prevent this from happening well into the future. In the past, being an engineer, I know I used to do this. I'd, you know, sit down and work on simulations and think about different analyses and simulations that I would do to prevent this, and it would have taken maybe months or years to really think about what the potential impacts are. Now with AIP, we can use this, you know, immediately in real time to help us understand what exactly can be done. What it shows me is adding wastewater treatment plants or storage tanks would of course, solve this problem, but at a huge monetary and environmental cost. We're going to see if we can improve our network's resilience by improving our ability to divert wastewater within the system.

For example, I asked the AIP assistant if any wastewater treatment plants or tanks have spare capacity in the system, and it identifies three for me. I ask for it to make recommendations or adjustments to the system to improve or rebalance capacity. It takes into account all the other factors that I've outlined: population growth, urban development, and it recommends that I consider installing two new pipes. I can adjust this further, but for now, I'm just going to go ahead and generate a report that captures this case study, gives me my recommendations for my stakeholders, shows me all of the data and simulations that went into the analysis.
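A spare-capacity query like the one described, finding facilities with headroom to absorb a rebalance, can be sketched as follows. The facility names, loads, and the 80% utilization cap are hypothetical.

```python
def find_spare_capacity(facilities, utilization_cap=0.8):
    """Identify plants and tanks running below a utilization cap and
    report how much load each could absorb in a rebalance."""
    spare = []
    for f in facilities:
        used = f["load"] / f["capacity"]
        if used < utilization_cap:
            headroom = round(f["capacity"] * utilization_cap - f["load"], 1)
            spare.append({"name": f["name"], "headroom": headroom})
    return spare

# Invented example mirroring the narration: three facilities have room.
facilities = [
    {"name": "plant-A", "capacity": 100.0, "load": 95.0},
    {"name": "plant-B", "capacity": 100.0, "load": 50.0},
    {"name": "tank-C",  "capacity": 60.0,  "load": 30.0},
    {"name": "tank-D",  "capacity": 80.0,  "load": 40.0},
]
spare = find_spare_capacity(facilities)
```

Routing new pipes toward that existing headroom is the cheaper alternative the narration contrasts with building new plants or tanks.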

Again, if you think about it, this would have taken us a long time to pull this all together with probably a lot of different opinions around what it looked like, and tweaking any of those assumptions would have, you know, created a lot of recycle and churn. We're really excited about where we're going. You can see we're developing an end-to-end solution that brings everything together, whether that's security for mission-critical infrastructure. It provides control that keeps humans in the loop at every step of the way where you prescribe it, and it's real-time. It provides us actionable AIP assistance to deliver those in-the-moment decisions for simulations, long-term planning, and it helps us make sense out of the massive volumes of data that we're creating all of the time.

It gives our operators this natural language interface to interact with to really make these complex decisions when they need to be making them. Of course, it always has the ability to show us the context, the inputs, why decisions were made. I'm excited, if you can't tell, about our relationship with Palantir, mostly because it's going to allow us to solve problems at a pace and a scale, and it's really that pace, right? That's gonna have a tremendous impact on the world around us, for all of our people, our customers, our stakeholders. Although we talked about just wastewater treatment today, I hope you see the applicability to a lot of different markets or industries and sectors around how we can leverage the power of AIP, deep domain expertise, as Bob talked about, to really improve our world around us.

Thank you very much.

Operator

Please welcome from HCA Healthcare, AVP of Clinical Operations, Canaan Stage, and Director of Management Engineering, Ben Spears.

Canaan Stage
Assistant VP of Clinical Operations, HCA Healthcare

Thank you all. It's really great to be here and to be able to share a little bit of our experience with you. As introduced, I am Canaan Stage. I'm the AVP of Clinical Operations, a registered nurse by trade.

Ben Spears
Director of Management Engineering, HCA Healthcare

I'm Ben Spears. I'm a Director of Management Engineering for CT&I, and historically, I've been on the labor management side. That's, I think, an important part of the story: historically, there's conflict between these groups. For those not in healthcare, to give you an example, when I was dating my wife, who was a nurse, her friends met me, found out what I did, and they were all petitioning for her to break up with me. I won them over, like I won Canaan over, but there is conflict historically, and you'll see why that is important.

Canaan Stage
Assistant VP of Clinical Operations, HCA Healthcare

I think that's a great story to paint that picture, and what we're gonna tell you a little bit about is how our teams came together and how we've been able to deliver a step change in healthcare that we're very proud of and that we hope to see continue to grow throughout healthcare. I'm gonna start by telling you a little bit about our organization, a little bit about our background, and really focus on the people part, the piece that I think is important. Then I'm gonna turn it over to Ben and let him share a little bit of the data part. There's a lot of people here that I'm sure are really interested in that.

That's just really not my cup of tea, hence why this relationship works so well and oftentimes works not at all. HCA Healthcare is over 180 hospitals throughout the United States, with a few in England as well. The impact that we believe we have in healthcare is really painted perfectly in these numbers. 93,000 nurses work for our organization, and one of the things that I know you all are very well aware of is that healthcare experienced quite a shake-up in these last couple of years with the pandemic and the challenges that that brought. One of those prominent challenges was around staffing and our ability to deliver the right patient care that we were trying to achieve.

That affected our organization as it affected many other organizations; it's what led to our desire as an organization to think about things differently. Our leader, Dr. Schlosser, created the Care Transformation and Innovation department, or team, or wing within our organization that is focused on taking a step change in healthcare and really delivering that, you know, through clinically led innovation. I call that out because that's where I want you to know why my importance on the team is just a little bit more important than Ben's, that clinically led part. You saw that too, right? Jokes aside, I think that's been part of the lesson that we've learned throughout healthcare and why we've had challenges in progressing and moving forward: it has been not our inability to listen, but where we were listening and how we were applying that.

Ben mentioned the relationship between the process improvement teams and our financial teams within our organizations, and then the clinicians themselves and somewhat of that balance or that approach that takes place day in and day out. A lot of that comes from the ability to have access and understanding to data and knowing what's there to be able to have those discussions. For us, our Care Transformation and Innovation team, our focus starts with building out really from the bottom up and starting with our clinicians.

We have two hub locations, which are really an opportunity for us to embed members of our team side by side with clinicians, learn from those clinicians, and understand what their challenge points may be before we even begin to design or create changes to those operations that are in place. We then have two other larger facilities, and why that's important to touch on is the realization that we're in an area that requires a strong balance of risk and reward. These are high-stakes, high-pressure environments: changes that we make to our healthcare process can have very positive impacts on our patients, but they also could result in some possible risk that we wouldn't want to see.

With litigation, which was brought up earlier, being a significant part of healthcare, unfortunately, we have to balance the idea that if we're going to make changes, we need to be sure that those changes take into account what the downstream impacts could be. The very best way to do that is to do it on site with a team and understand the challenges that they're having. The other part of that is, being over 180 hospitals, there's a lot of customization that goes into the way that each individual hospital operates and each individual unit within that hospital operates. So understanding the why, the Simon Sinek, I'm a big believer in him as well, too, so I loved hearing that today.

Understanding the why of those challenges, and why they may deviate from what is maybe an accepted standard of care, is really going to help us understand how we attack that challenge and that change and move forward in our process. When we challenged the idea of taking on, first and foremost, staffing, particularly coming right out of the pandemic, you know, there were a lot of people that believed the right answer was: it can't be done. We can't theoretically change the way that we approach this. Staffing for many, many years has been about taking what's available to you, and then as team members come in and labor team members come in, they give us their availability. They work on the shifts that they have.

We wanted to take a little bit of a different approach, and we wanted to understand and value our employees somewhat differently. So we presented this idea to a few different companies, and this was a direct quote from one of those companies: "It cannot be done, what you're trying to achieve." Another organization that we met with, we worked through part of the project, and we realized that maybe it couldn't be done with this group and this team. Then we found a new partner, and that partner was Palantir. I think what you'll see on the screen is that there were a lot of similarities and a lot of alignment between our organization and our thoughts, and where we could take the steps to improve that and to put this process into place.

We've heard several things called out today: the speed of deployment, the ability to integrate with our teams and be on site. I'm very fortunate to actually share a few photos of our teams working together right on site. This is at two different innovation sites. In this picture to the far left is actually one of our groups of Palantir team members, and someone here in the audience is from that team; we both very much appreciate the opportunity to work with them day in and day out. What's different about Palantir for us has been the willingness to understand the why, the willingness to embed themselves with our teams.

In fact, sometimes they push us: "Hey, we haven't been on site in a little while. Can we go back?" That type of relationship and that type of interaction is really what has led us forward to being able to build a platform that is transformational. There's no other way to say it. As a nurse myself, what I think it's done is empower nurses to have better conversations about needs, better conversations about how we support each other, and better conversations about how we deliver the very best care we can possibly deliver. I will say that over time, I've learned to appreciate that process improvement idea and how we can work together through that.

At this point, what I want to do is turn this over to Ben, for him to bore you, I mean, to provide you a little bit of the detail around how we achieved this new process and format.

Ben Spears
Director of Management Engineering, HCA Healthcare

Perfect. Thanks, Canaan. This is what it looks like today. The black boxes, compared to the orange boxes, are the things we didn't know; we had no insight into them. We didn't know what the originally posted schedule was, because we didn't have change data capture and so many changes happened after the schedule was posted. As a company, we talk about our scale, but what that looks like is 2,000 nursing departments with 3,000 leaders, and us trying to teach them the right way to balance and post a schedule.

We have 20% turnover within those roles. It's very tough to get to those 3,000 individuals and tell them, "This is the right thing to do, and this is the right schedule to post." We saw a lot of variation, we saw a lot of bias, and to be honest, we didn't really know what that looked like. We had assumptions. We flip over to the future state: we now have all these data points. We're able to see the staff inputs and the system inputs.

We turned it on, and we're like, oh my gosh. We understood that we needed change management because this is a big change, but I don't think we truly appreciated what the change was going to be until we were able to drill into our data and see some of the practices that had gone on. How our process works is: the staff give us information about themselves, so we have a talent profile to understand what kinds of roles they can do on the unit, and we get their schedule preferences. We marry that up with volume demand prediction and business logic to generate a schedule. We want that schedule to be 95% of the way done, and we want the manager to then review that schedule before posting.
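The generation step described here can be sketched in miniature. This is a hypothetical illustration, not HCA's or Palantir's actual implementation: the nurse names, roles, and greedy fill logic are all assumptions standing in for the real talent profiles, preferences, demand prediction, and business logic.

```python
from dataclasses import dataclass

@dataclass
class Nurse:
    name: str
    roles: set            # talent profile: roles this nurse can fill on the unit
    preferred_days: set   # schedule preferences collected from the staff

def draft_schedule(nurses, demand):
    """Generate a draft schedule from staff inputs and a demand forecast.

    `demand` maps (day, role) -> number of nurses needed. The business logic
    here is a simple greedy fill that honors preferences first; the goal is a
    schedule that is ~95% done before the manager reviews and edits it.
    """
    schedule = {slot: [] for slot in demand}
    for (day, role), needed in demand.items():
        # Prefer nurses who asked to work that day and can fill the role.
        preferred = [n for n in nurses if role in n.roles and day in n.preferred_days]
        fallback = [n for n in nurses if role in n.roles and day not in n.preferred_days]
        for nurse in (preferred + fallback)[:needed]:
            schedule[(day, role)].append(nurse.name)
    return schedule

nurses = [
    Nurse("Avery", {"ICU"}, {"Mon", "Tue"}),
    Nurse("Blake", {"ICU", "ER"}, {"Tue"}),
    Nurse("Casey", {"ER"}, {"Mon"}),
]
demand = {("Mon", "ICU"): 1, ("Tue", "ICU"): 2, ("Mon", "ER"): 1}
print(draft_schedule(nurses, demand))
```

The manager-review step then only has to correct the remaining edge cases rather than build the schedule from scratch.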

We strongly believe the automation's not gonna do 100%. There are gonna be nuances the manager needs to handle. But in the current process we have today, because it can't be automated, it's relying on that nurse leader, and that's 10 to 20 hours every month of the nurse leader having to build the schedule, across 2,000 departments, 12 months a year. This new process has been very successful. We've been able to get that number down to 1 hour, a 95% reduction, at some of the best-utilized hospitals and departments that we have, and we're really excited. This is what it looks like. We're utilizing a handful of different workshops throughout the process.

We have the personal workshop, where staff members are able to enter information about when they want to work, request PTO, those types of things. We have the staffing workshop, which is where the managers will go in, run the schedule, hit the automation, come back, do their edits, and post the schedule. Then we have the staffing workshop and the facility workshop to take care of all the edits that need to happen after the schedule posts. And because we need insight and overview, the CT&I team and our executive leaders can use those dashboards to understand the process, look for ways to improve, and educate and coach, closing that feedback loop back to the nursing leaders.

We've really hit on the value to the nursing leadership, but to the organization as a whole, having Ontology and having this data is transformational for us. Today, it's a bunch of people like me reading the tea leaves and the smoke, trying to figure out what contract labor and what kind of premium utilization we're gonna have this month and next month, and whether we need to bring in these contractors at this rate to fill our needs. We were able to build screens where we can turn that art into a science, with everyone calculating it the same way. It's so easy to access the data, and so easy for us to merge the data with other things.
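"Everyone calculating it the same way" might look something like the following. This is a minimal, hypothetical sketch with made-up numbers and field names, not the actual Foundry screens: the idea is just that a single shared definition of the contract-labor gap replaces each analyst's private estimate.

```python
# One shared definition of the premium-labor gap, applied uniformly.
def contract_labor_gap(predicted_demand_hours, core_staff_hours):
    """Hours that must be covered by contract/premium labor; never negative."""
    return max(0, predicted_demand_hours - core_staff_hours)

departments = {
    "ICU": (1200, 1050),  # (predicted demand hours, available core-staff hours)
    "ER": (900, 980),
}
gaps = {d: contract_labor_gap(p, c) for d, (p, c) in departments.items()}
print(gaps)  # {'ICU': 150, 'ER': 0}
```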

The other use case that's been brought up is: now that we have all this information, we can do an almost forensic-level analysis of each of these staff members' schedules. Let's marry that up with turnover information and see what the root causes are when we have this turnover. Can we identify issues, whether that be the orientation process? Today, we know we have a best practice for what orientation should be. We've turned it on, we have the insight, and we know that what we say we should be doing and what's actually going on in our hospitals are completely different.
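That forensic join can be sketched roughly as follows. This is a hypothetical illustration with invented names and data, not HCA's actual analysis: departed nurses' orientation shifts are joined to turnover records to test whether a best practice actually held in practice.

```python
orientation_shifts = [
    # (nurse, orientation week, preceptor on shift)
    ("Avery", 1, "Lee"), ("Avery", 2, "Lee"), ("Avery", 3, "Lee"),
    ("Blake", 1, "Lee"), ("Blake", 2, "Ruiz"), ("Blake", 3, "Kim"),
]
departed = {"Blake"}  # nurses who left within their first year

def preceptor_counts(shifts):
    """Count distinct preceptors each nurse saw during orientation."""
    seen = {}
    for nurse, _week, preceptor in shifts:
        seen.setdefault(nurse, set()).add(preceptor)
    return {nurse: len(ps) for nurse, ps in seen.items()}

counts = preceptor_counts(orientation_shifts)
# Flag departed nurses whose orientation bounced across multiple preceptors.
flags = {n: c for n, c in counts.items() if n in departed and c > 1}
print(flags)  # {'Blake': 3}
```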

Let's tie turnover rates to preceptors: when we say we want one preceptor for the whole 12 weeks, are we actually doing that, or are we bouncing around 12 other people, so they're not feeling involved?

This is kind of the roadmap that we're on. We kicked off the partnership in January, and we've quickly scaled. I'd say this slide is a little optimistic, and it looks pretty; we've had some road bumps along the way, and I'd like to share that with this group. I think the biggest thing in innovation, especially in healthcare, as Canaan mentioned, is there's this desire to always be perfect.

When we have bugs or we have these fixes, it's even the quote that I heard yesterday: "It's rewarding to close the ticket." What happened is we got into this three-week period at the end of March, leaking into April, where we had our short-term architecture as we transitioned away from our existing solution. As we moved toward the long-term architecture, we had some, I'll call them issues, with the short-term architecture, and we were spending a lot of time trying to fix the immediate problem. When we get to August, all that work is going to be thrown away.

We had to look at ourselves in the mirror and be okay with breaking a couple of things, and say that in the long term, to hit our medium-term goals, it's gonna be okay to break some things at the smaller scale. That's really what this department's about: understanding that it's okay to tinker, that it's okay for things to break in a safe environment. We may ask our department leaders at these hospitals to bear a little bit of the burden in the short term, knowing that we're gonna build a product so much better six months from now, and that product's gonna be able to serve all of our nursing leaders across the company.

The one thing I want to say in this process, and I really want to call out Palantir here because I've been really impressed, is that they understand, and we're very aligned on, what our goals are. There were moments, especially going from 25 departments to 50 departments, where the scale differed from what we expected based on our experience with the 25. We even gave them an out: "Hey, here's a short-term fix. We're gonna take this short-term fix, and it's gonna create technical debt, and it's gonna make life harder down the road." They're like, "No, we do not want to do that. We're going to do the right thing. We know it's gonna be harder now.

We're gonna do extra work now, so we can hit our goals." A common theme today has been speed, and we're quickly transitioning out of building the product and into speed. We plan on going to five more facilities in August, with the ultimate goal of hitting 40 by March of next year. Trust was another big thing that came up this morning. Without Palantir, could this be done? I'm not sure, but I do stand up here today with trust and faith in our team to hit our goals. A lot of them are in the room or listening, and I wanna give them a thank you as well.

Canaan Stage
Assistant VP of Clinical Operations, HCA Healthcare

Yeah, I think the last thing I'll add to that is, you know, when we were younger, we were probably the guys in the back of the class who were told we were disruptors, right? That was not a good thing to be. In healthcare, when we've been able to disrupt some of our general thinking and do it with the support and the power that Palantir has brought with the data, it truly becomes a very valuable resource. The feedback from our frontline team members, obviously, is one that can be a little bit uncomfortable at times.

Overall, time and time again, we hear the words thank you. Those are the words that we wanna leave with the Palantir team as well. That is: thank you for an exceptional partnership. We look forward to sharing our story with you more. Hope you all have a great day.

Sasha Spivak
Head of Corporate Development, Palantir

A final round of applause for our customer speakers today. All right, we're getting ready to break for lunch. We've got a packed afternoon planned for all of you, with a customer demo expo, a series of meetups, and a series of whiteboarding sessions. To close things out, I'm very honored to introduce a member of the team who has been behind the past few FoundryCons and today's AIPCon: Morgan Schwartzman.

Morgan Schwartzman
Event Specialist, Palantir Technologies

Hi, everyone. I am responsible for all personalized agendas for FoundryCon, and now AIPCon. I used to do this in a very manual and painstaking process via several spreadsheets, data collection sources, and templates. This process was extremely error-prone and had bandwidth and functionality limitations, and if there were any scheduling updates during the programming, there was no real way for us to communicate that to you. AIPCon is now completely powered by Foundry. I used Dynamic Scheduling, a product that launched at the last FoundryCon in February, just four months ago, to personalize hundreds of agendas tailored to all of your goals. Even those of you who were on the wait list got your personalized agendas. You saw our keynotes ran slightly over today; we're going to go in and adjust your schedules.

Previously, a change like this would have taken a crazy number of hours and a lot of manpower. Using AIP, instead of making an individual change to every agenda, I can simply provide the LLM a description of the change I need made, and it can be applied to all individual agendas at once. Now, some of you customers here with us today are responsible for managing flight swaps for hundreds of aircraft when they need maintenance. Others are staffing thousands of nurses across hospitals. Still others are managing complex production schedules. It's the same product across industries. The adjustment that we just made live together has now automatically propagated and updated any downstream effects on meetings and other sessions. If you've opted in to our text messages, you will receive your scheduling update directly on your phone.
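The pattern described here, one natural-language description fanned out as a structured edit across every agenda, can be sketched as follows. This is a hypothetical illustration, not Foundry's or Dynamic Scheduling's actual API; the LLM call is stubbed with a hard-coded parse, and all names and times are invented.

```python
from datetime import datetime, timedelta

def interpret_change(description):
    """Stand-in for the LLM call: map a description to a structured edit.

    A real system would send `description` to a model; here we hard-code
    the parse for the kind of phrase used on stage.
    """
    if "ran" in description and "over" in description:
        return {"op": "shift_after", "cutoff": "12:00", "minutes": 15}
    raise ValueError("unrecognized change")

def apply_change(agendas, edit):
    """Shift every session at or after the cutoff, in every agenda at once."""
    cutoff = datetime.strptime(edit["cutoff"], "%H:%M")
    delta = timedelta(minutes=edit["minutes"])
    updated = {}
    for attendee, sessions in agendas.items():
        new_sessions = []
        for time_str, title in sessions:
            t = datetime.strptime(time_str, "%H:%M")
            if t >= cutoff:
                t += delta  # downstream sessions propagate automatically
            new_sessions.append((t.strftime("%H:%M"), title))
        updated[attendee] = new_sessions
    return updated

agendas = {
    "attendee_1": [("11:30", "Keynote"), ("12:00", "Lunch"), ("13:00", "Demo expo")],
    "attendee_2": [("12:00", "Lunch"), ("13:00", "Whiteboarding")],
}
edit = interpret_change("the keynotes ran 15 minutes over")
print(apply_change(agendas, edit))
```

The key design point is that the LLM produces a structured edit rather than rewriting agendas directly, so the same deterministic propagation logic applies it consistently everywhere.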

For those of you who dialed into the live stream, thank you for joining us. We hope that you take the stories that you've heard this morning and continue building on them. We encourage you to participate in this trusting environment and connect with each other. Now, on to lunch.
