
Oracle AI World 2025 Keynote Vision and Strategy

Oct 14, 2025

Lawrence Ellison
CTO and Chairman, Oracle

Hi, everybody. Let's see. Okay, the slide says AI changes everything. That's kind of a big statement. Everything. I think it's pretty close. Okay, I'm going to talk a little bit about how Oracle's been responding to these changes. They started in earnest when ChatGPT came out and suddenly AI models started sounding a little bit like us. There are two big phases of this AI technology. One is the dawning of the AI era, with a bunch of companies building these enormous AI models. An AI model right now, what's called a multimodal AI model, is made up of several neural networks, like your brain has several parts. It's actually kind of a perfect analogy. To do vision, you use one part of your brain. To do language, you use a different part of your brain.

When you build an AI model, you use a different neural network for vision. You know, seeing something, seeing its edges, seeing its shape, seeing its color, seeing it move. You use one neural network for seeing it and quite a different neural network for recognizing what it is, identifying it, and then a third neural network to classify it, organize it, and reason with that data. Very much like our brains, a modern AI model is a multimodal model. It has multiple neural networks to look at different kinds of data: video data, textual data, hearing, things like that. What's going on right now is a series of companies spending vast fortunes training these AI models on publicly available data on the Internet, enormous amounts of data. That's AI training.

It's very apparent, after a few years of it, that this is the largest, fastest growing business in human history. Bigger than the railroads, bigger than the Industrial Revolution. I mean, it is a whole new world that is dawning. There's the building of the models, and once those models are built, there's the actual use of those models to solve very important problems. Early diagnosis of cancer, for example. There'll be a lot of surgery that is more precise and more accurate than human beings can do. Robots will be much better surgeons than human beings can be, for all sorts of interesting reasons you might not guess. The big opportunity in AI training is upon us. Oracle is a major participant in building data centers to do AI training.

The much, much larger opportunity, the one that will truly change the world, isn't the creation of the models themselves, the training of the models. What will change the world is when we start using these remarkable electronic brains, and that's what they are. These are remarkable electronic brains to solve humanity's most difficult and enduring problems. There's one thing that's kind of interesting, where Oracle's explicitly involved, which is, as I said earlier, these AI models are trained on publicly available data, all the data on the Internet. If you look at ChatGPT or Anthropic, Grok, Llama, what have you, they're all trained on all of the data on the Internet. In other words, publicly available data.

For these models to reach their peak value, you need to train them not just on publicly available data; you need to make private, privately owned data available to those models as well. That's where Oracle plays a particularly important role, because most of the world's high value data is already in an Oracle database. We had to change that database, and it is past tense: we did change it, so the Oracle database can take the data that's already in it and make it available to AI models for reasoning. The AI model can reason not just on public data, but on private data. AI is an incredible tool. Some people think it's going to replace all human beings in all of our human endeavors. I don't think that's true. It will help us solve problems we couldn't solve on our own.

However, it will make us much better scientists and engineers and teachers and chefs and bricklayers and surgeons and what have you. We've never built a tool, anything like this. I pressed the button and the slide didn't move. Do it again. I pressed the button again and this is not an AI device. One more time, and then I'm just going to say the word slide. Ah, there we go. Okay, I did both. Who knows why it moved? I remember when this wasn't called AI World. I remember when it was called Cloud World. A long, long, long time ago, I did a presentation about AI. Even though it was called Cloud World, I was still allowed to do a presentation on AI. I said, is AI the most important technology in human history? We're going to find out soon.

It's pretty clear the smartest people I know are working on it. What? I didn't press the button. Not pressing the button. Can you back up the slide, please? Thank you. I'm just going to put that down. We're going to get a better one next time. The smartest people I know are investing fortunes. To be specific, they're investing their fortunes in building and training these AI models. That's how important they are, that's how extraordinary they are. By the way: Elon, Mark, Sam, in alphabetical order, all really smart guys, extraordinary people. Now, some people say, you know, this AI thing, maybe it's just a bubble. Maybe it's not that big a deal. Well, the Internet was a big deal. If you look at the fortunes created on the Internet, I mean, it certainly worked out for Google.

Search seems to have paid off. Elon, on that list, did start PayPal, and that paid off nicely. I've asked him; Elon said he definitely didn't put a dime into pets.com. The thing is, when people talk about bubbles, what is a bubble? People get exuberant. The Internet was an incredible new technology and remains the foundation of computing. We couldn't have AI without the Internet. It's incredibly important technology. People started confusing Internet companies like a PayPal, or even worse, Internet search, worse meaning better, with pets.com. I mean, the idea that if I can sell pet food on an e-commerce site, I'm suddenly an Internet company. Not really. Yes, there'll be people spending money on AI, because almost every tech company these days calls themselves an AI company. They're not. A lot of them are not.

In terms of its value, AI is the highest value technology we have ever seen, by far. Next slide, please. AI. It's interesting because it's called artificial intelligence, as opposed to artificial perception. It does perceive. It hears, it smells. Think about smelling: the idea that you can pick up chemicals that are just drifting around in the atmosphere and figure out what those chemicals are. Dogs can smell cancer in patients. We should be able to do that with AI. We should be able to. In fact, there's a project I know of called the Dog's Nose that I'm actually a part of. We're building sensors that can smell cancer or other illnesses. The AI perceives. It's got the part of the brain that hears and sees in addition to reasoning. It can read street signs, it can read a page in a book.

It can take a look at you and recognize you, identify the song that's playing. You can talk to AI and ask it a question, or type it out. AI can reason logically very quickly, using language the same way we do, and mathematics. I remember I was over at Tesla looking at the Optimus robots, and I was curious just how the robots were going to learn. Then I thought about it for a minute and said, how would a robot learn how to clean your house or scramble eggs or play the guitar? It would just watch an Internet video. It's connected to the Internet.

It can learn to play piano just like we would watching an Internet video, except it would do it a little faster because it can play the Internet video at very, very high speed and learn to play that piece by Chopin in about five seconds. I know my kids can't learn to play piano that fast because I listen to them practice every day. Five seconds is out of the question. AI robots will be much better surgeons than the best doctors. There's this very famous surgery started by Dr. Mohs who actually would take cancer lesions off patients' faces. He was so famous for it because he did the least damage. He took the least amount of tissue off your skin. Cosmetically he had fantastic results.

What he did, he would take a couple of layers of skin off and then take that skin over to a microscope and look at it: am I taking any healthy cells yet? I mean, how deep does the cancer go? He'd go back and forth: cut a little bit of tissue, look at it under a microscope, cut a little bit more tissue, look at it again. AI robots just don't play fair. The vision on the robot is microscopic. They don't need a microscope to see individual cells. They don't need a microscope to see where the cancer ends and the healthy tissue begins. Their coordination is exact. They're better surgeons than we are, not because they're smarter than we are, but because they have better hand-eye coordination.

Their eyes are way better than ours and the precision of their hands is way better than ours. They can cut between a layer of healthy cells and a layer of cancerous cells. It's truly stunning to watch, and it'll be very reassuring when we can go to a doctor who's using a robot to do the surgery. The surgery will be perfect. I said this earlier, but it is so interesting: it's built just like the brain's specialized neural networks, one for vision. Literally, a convolutional neural network simulates the visual cortex. The visual cortex has five layers. It's right in the back of your head. Evolution produced the very first layer, V1, just so the animal could perceive the edges of whatever it was looking at.

Then it got up to V4 for color and the very famous V5 for detecting motion and threats in the environment. The convolutional neural network produces a bitmap, an image, a bunch of pixels if you will. The ViT, the vision transformer, then takes that bitmap and compares it to things that you already know, so you can recognize faces and things that are familiar. That's different. That's a transformer, a ViT neural network, for a holistic understanding of the image and recording it. Version 3 of ChatGPT was the one that used the huge transformer networks that did comprehensive language and reasoning. The transformer networks came later; we had facial recognition long before we had the ability to converse and reason using language.
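The V1 edge-detection analogy can be illustrated with the basic operation a convolutional layer performs: slide a small filter over an image and record where the response is strong. This is a toy sketch of the concept only, with a made-up filter and image, not any real vision network.

```python
# Toy illustration of edge detection in a convolutional layer:
# the filter "fires" along brightness boundaries, much as V1 cells
# respond to edges, and stays silent in flat regions.

def convolve2d(image, kernel):
    """Valid-mode 2D convolution (cross-correlation) on lists of lists."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            acc = 0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A vertical-edge filter (Sobel-like): responds where brightness
# changes from left to right.
vertical_edge = [[-1, 0, 1],
                 [-2, 0, 2],
                 [-1, 0, 1]]

# 5x5 image: dark on the left, bright on the right.
image = [[0, 0, 10, 10, 10]] * 5

response = convolve2d(image, vertical_edge)
# Strong response at the dark-to-bright boundary, zero in flat areas.
```

In a real network, many such filters are learned rather than hand-written, and their outputs feed the later layers (and, in a multimodal model, the vision transformer) described above.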

With the GPT network, the generative pre-trained transformer network, which is what is doing the language and the reasoning, that requires enormous amounts of compute. Thus the requirement for fortunes to train these models, these networks. The transformer network is much bigger and much more complex than some of the other networks. As you'd expect, reasoning is more complex than vision. There are networks for certain types of mathematics. Anyway, it looks a lot like the brain. The brain is amazing: a 20 W human brain. 20 W. Have you ever screwed in a 20 W light bulb? That's not a lot of light. But it's enough to run 86 billion neurons and give you vision and balance and reasoning and language and creativity and the ability to do deduction and inference. You can do all of that with this incredible, what Elon calls a 20 W meat computer.

Sensation, recognition, and after recognition, the ability to reason on it. Again, the visual cortex is right behind the parietal lobe. Behind and below it, the prefrontal lobe. As you can see, on the left side is a big language center. The brain is very specialized, and so are the AI models. We're not building a 20 W meat computer. We're building a 1.2 billion W AI brain. Did you ever try to do multiplication as fast as an HP calculator? These electronic brains, these AI models, reason and they reason very quickly. They can deal with a lot of data, and they can get to answers that we've never gotten to. This is a picture of a data center we're in the process of building. Actually, it's up and running. Part of it is up and running. Eventually, it's going to have half a million NVIDIA GPUs in it.

By the way, to give you an idea: 1.2 billion watts. What does that really mean? That's enough to power 1 million 4-bedroom homes in the United States. A million. That's a pretty good sized city. I think we've got a video of the construction.

Oracle is building the world's largest AI cluster for OpenAI in Abilene, Texas. The project began as empty land in June 2024 and is delivering GPUs in less than one year. The cluster will contain more than 450,000 NVIDIA GB200s. When fully provisioned, power is provided by a combination of grid power and on-site natural gas turbines. Capacity is provided in eight separate buildings spanning 1,000 acres, all interconnected to support a single workload. This site deploys the latest technology across AI accelerators, liquid cooling, and networking. More than 3,500 people work on site each day to deliver capacity at an unprecedented rate. Demand for AI continues to exceed supply. Oracle is committed to delivering the largest and most advanced AI clusters to support our customers all over the world.
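The power comparison from a couple of paragraphs back is easy to sanity-check with back-of-the-envelope arithmetic:

```python
# Sanity check on the claim: 1.2 billion watts spread over
# 1 million homes gives the average continuous draw per home.
total_power_w = 1.2e9   # the 1.2 GW figure from the talk
homes = 1_000_000       # "1 million 4-bedroom homes"

per_home_w = total_power_w / homes
print(per_home_w)  # 1200.0
```

About 1.2 kW per home, which is roughly the average continuous draw of a US household (on the order of 10,000 kWh per year), so the comparison holds up.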

That's a long way from writing code in my bedroom in college. What happened? I have no idea. Okay, we're in the middle of training. We trained the very first version of Grok for Elon. We're training a number of other of these multimodal AI models. Almost all of these AI models are in the Oracle Cloud. I'll come back to that. We're certainly involved in training more multimodal AI models than any other company. It's very exciting and it's daunting. I mean, the size of these projects that we're running: it's not just building the network of GPUs, the computer rooms and the networks and the cooling. That was hard, by the way. That was hard in the first place. Now we have to build the power generation and transmission too.

There's a natural gas pipeline that goes to the gas turbines, fires up the gas turbines, and generates electricity. That electricity then has to be moved to the data center. It's gas pipelines, power generation, power transmission, data centers, networks. Those data centers are filled with lots and lots and lots of complex software and a lot of very smart, hardworking engineers. These are enormous engineering projects, each and every one of them. What we're building, what we're trying to build, are these multimodal neural networks trained on all types of data: textual data, images, audio, video, every publicly available piece of data, plus synthetic data we train these models on. Some of the models are designed to be real time. Actually, Google has two models. One is Gemini; the other, from DeepMind, is highly specialized around molecular structures.

DeepMind won a Nobel Prize last year. Not this year; last year, for protein folding. You take a molecule where you understand the chemical formula of that protein, a chain of amino acids, and ask: what does it look like in 3D when you fold it up, when it's no longer a string but folded? That's a problem we'd been working on for a very long time, called protein folding. They solved it with the DeepMind model that Google acquired when it bought DeepMind in London. Elon has two AI models that are very, very different. One is the Grok multimodal AI model; the other is Tesla's, and it's a real-time model.

Real-time models have some different characteristics than, let's say, an Anthropic model that generates code, or ChatGPT solving a legal problem or a medical problem, something like that. If you're driving cars, things happen very fast. You have to have vision; you have to have cameras all over those cars. If something happens, you might be required to respond in a millisecond at least. A microsecond is really fast. In the car, a millisecond: a thousandth of a second. A ball suddenly comes off a curb and a bike is following the ball, and you have to see it, understand what's going on, and take evasive action so there's no accident and no one's injured. You have to build things differently.

You can't afford the network traffic, going back across the network to talk to an AI model that's far away; you need very, very low latency response time. That's why all the Tesla cars and all the Tesla robots have to have local compute, in the car, in the robot, to make an immediate, very low latency decision. That's not required if, for example, you're writing code: I can tell you what code to write, and you can take a moment to think about it and then give an answer. The real-time models are a bit different than the models that don't require real time, where you have some time to reason and compute your answer. Both types of models are very important and both types of models are being built. These models do multi-step reasoning.

What I'm calling reasoning, not long ago, was called inferencing. People would talk about training the model, and then, when we were using the model, when the model was reasoning, we reduced that to just inferencing, a type of reasoning. That's no longer accurate. In the early days, that's kind of what models did: inferencing. Not anymore. They reason like we reason. There's a list: they do deduction, they do inference, they do calculations, they have strategies, they have rules. All the reasoning techniques that we use, they simulate and use. They think a lot faster than we do and solve problems a lot faster than we do, or they solve really complicated problems that we can't solve at all. That's what makes this so exciting and so enormously valuable. These models can answer your questions.

They can generate computer code. A lot of the code that Oracle is writing, Oracle isn't writing; our AI models are writing it. We just tell the model what we want the program to do, and then the AI comes up with a step-by-step process to actually do it. We don't write the procedure, we declare our intent; the model writes the step-by-step procedure, that thing we commonly think of as a computer program. They diagnose medical images far better than we do. They design drugs that we can't. There is a big gotcha with these models, and that is that they do not get trained on your private data, because, for some reason, people want to keep their private data private. That's not going to change. But people also want these models to reason on their private data.

You know, have your cake, eat it too, whatever you want to call it. I want to keep my data private. I don't want to share it with anybody else. However, I'd like to use this enormously powerful tool to reason on my private data. That's one of the big things Oracle has been applying itself to in terms of solving that particular problem, and we have this new thing we're talking about this week here in Las Vegas: the Oracle AI Data Platform, the Oracle AI Database in the Oracle AI Data Platform. An interesting thing about the AI Data Platform is it includes a multimodal model of your choice. Whoa. A multimodal model of your choice. That's great. If you want to use Grok in the Oracle Cloud, you can use Grok. If you want to use ChatGPT, you can use ChatGPT.

If you want to use Llama, you can use Llama. If you want to use Gemini, you can use Gemini. We'll attach that model, the model of your choice, to not only the public data—the model's already connected to the public data, that's done—but we give you the ability to add your private data to the model's library of information and knowledge. The model can reason across not just public data, but also private data while keeping your private data private, not sharing it with anybody else. That's very, very important. It's not easy to do it in a highly secure way. It's not easy. If it was easy, a lot of people would have already done it.

As I said, OCI includes all of the popular multimodal models, which you can mix and match, and we have the AI Database and the AI Data Platform that let you add private data to the models. In fact, I'm going to be a little more precise this time about what it really does. It's called RAG, retrieval-augmented generation, by the way. You basically take a bunch of data that the model has not been trained on. That might be today's stock prices. I mean, the model doesn't know today's news. The model hasn't been trained on today's news. The model hasn't been trained on today's stock prices. But the model knows where to look for it; it knows how to go look at today's stock prices. It knows how to look at the ticker and get the very latest quote.

You just put that information in a database that the model can access, and you put your private data in an Oracle database. The new Oracle database is called an AI database, not just because AI is fashionable. The new Oracle database is called an AI database because it has this RAG capability. It has the ability to take any of the data in the Oracle database and make it accessible to the AI model by vectorizing it. Since a lot of your data is in an Oracle database already, you simply ask the Oracle database to put that data in a format the model will understand, and that's called a vector format. The Oracle database will vectorize any data that you want to make available to the model. Then the model can reason on it. And by the way, it's not just data.

It's not just data in an Oracle database that the Oracle AI database will vectorize. Let's say you have a lot of data in OCI Object Storage, or Amazon's object store for that matter, and you'd like to make that data available to the model, to the Oracle AI Data Platform. No problem. The Oracle database can go into OCI Object Storage and create what's called a vector index over the data there. It can go into Amazon cloud storage and vectorize the portions that belong to you and make them accessible for reasoning by the multimodal model. You're not restricted to data that's in the database. The Oracle database can vectorize anything in an Oracle database, a different database, or a different cloud, and make that data easily accessible to the AI model for reasoning. The reasoning is fascinating.
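The retrieval step behind RAG can be sketched in miniature. Here, toy bag-of-words vectors stand in for a real embedding model, and an in-memory list stands in for a database vector index; the names (`vectorize`, `retrieve`) and documents are invented for illustration, not Oracle APIs.

```python
# Minimal sketch of RAG retrieval: vectorize "private" documents the
# base model never saw, then fetch the most similar one for a question.
import math
import re
from collections import Counter

def vectorize(text):
    """Toy stand-in for an embedding model: a word-count vector."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

private_docs = [
    "Q3 revenue for the Zurich branch rose 12 percent",
    "The Geneva branch signed a new database contract",
]
# The "vector index": each document stored alongside its vector.
index = [(doc, vectorize(doc)) for doc in private_docs]

def retrieve(question, k=1):
    """Return the k documents most similar to the question."""
    qv = vectorize(question)
    ranked = sorted(index, key=lambda item: cosine(qv, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# The retrieved text is what gets prepended to the model's prompt, so
# it can reason over private data it was never trained on.
context = retrieve("What happened to revenue in Zurich?")
```

The private data itself stays put; only the retrieved snippets reach the model at question time, which is what keeps the data out of the training set.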

The first project Oracle did, in terms of taking private data and making it accessible to AI models, is we took all of our customer data and we vectorized it. We basically used RAG to make it available to the models. We started with customer data because we think there's nothing more important to us than our customers. Some people who are cynical would say there's nothing more valuable to us than our customers, but those go hand in hand. There were certain interesting questions we wanted to ask, questions we thought were extremely high value. There's a whole industry called Customer Relationship Management. Actually, it's not called that anymore; they changed the name to CX, Customer Engagement Management. Whatever the name is, we know what the questions are.

We ran this project inside of Oracle: took our private customer data, put it in an Oracle database, vectorized it, and used RAG to make it accessible to a multimodal AI model. We asked the question: which Oracle customers are likely to buy another Oracle product in the next six months? Why should that be important to us? And specifically, what product? For each and every customer that's going to buy something in the next six months, do you mind telling me what product they're most likely to buy? And by the way, it's not just questions that this thing does. You can ask questions, you can prompt it and get answers, but you can also ask it to do things via agents.

You can create little computer programs, sometimes not so little, and ask the AI to actually do something, to orchestrate some process. We said, okay, let's send an email to all of our prospective buyers with the three best customer references, encouraging them to buy. That request required the generation of a computer program called an AI agent, which had to figure out: okay, you were going to buy this product, you're a bank in Switzerland, so we think the best references for you would be the banks in Switzerland that have already bought that product. All of the references would be customized based on what we know about you as a customer and the exact situation you're in: the business you're in, the products you have, the other banks you have good relationships with and can call for a reference.
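The reference-selection step the agent performs can be sketched as a simple matching rule: find customers in the same industry and country who already own the product. The data, field names, and company names below are all invented for illustration; the real agent would be generated, not hand-written like this.

```python
# Toy sketch of reference selection: match a prospect with
# same-industry, same-country customers who already own the product.
from dataclasses import dataclass

@dataclass
class Customer:
    name: str
    industry: str
    country: str
    products: tuple

customers = [
    Customer("Alpenbank", "banking", "CH", ("AI Database",)),
    Customer("Helvetia Trust", "banking", "CH", ("AI Database",)),
    Customer("Nordsee Retail", "retail", "DE", ("AI Database",)),
    Customer("Lakeside Bank", "banking", "CH", ()),
]

def best_references(prospect, product, k=3):
    """Up to k same-industry, same-country customers who own the product."""
    matches = [c for c in customers
               if product in c.products
               and c.industry == prospect.industry
               and c.country == prospect.country
               and c.name != prospect.name]
    return [c.name for c in matches[:k]]

prospect = Customer("Lakeside Bank", "banking", "CH", ())
refs = best_references(prospect, "AI Database")
# → ["Alpenbank", "Helvetia Trust"]
```

A real agent would also draft the email and route it through a workflow; this shows only the customization logic the talk describes.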

It's extremely interesting that it can solve a problem like this so quickly and tell us what the sales force at Oracle should be concentrating on over the next six months. Kind of amazing. Okay, so that application, that AI agent, if I can just back this up one, I'm going to have to back up my slide once. Okay, the last thing, the last line: send an email to prospective buyers with the three best references. From that single line, we can generate the AI agent. Or, if you wanted to do a little bit more, you could get even more precise and add more to it: exactly what you want to do, what kind of letter you want to send them, what else. Make the agent even more capable.

That's actually what we did. By the way, I don't know if you've heard this term; I thought it was a little strange the first time I heard it. Vibe coding. Sounds very Gen Z. What is the latest one? Z. Sounds very Gen Z, which is: just say what you want the program to do, generate the prototype, and try it out. You know, don't think about it too hard, just kind of get a feeling for it and, you know, feel the vibe, I guess. That actually works. I mean, you can use English. You can generate computer programs directly from English. Personally, I've had debates with other engineers here at Oracle about whether using English as a programming language is a good idea, because English is notoriously imprecise.

Wouldn't we be better off, if we want to generate programs, to create a custom, highly precise declarative language for computer programming? That's what we did at Oracle using APEX. We added a declarative AI generation language to APEX for generating applications. There are plenty of people out there still working with English, and that's fine. It's up to you; we don't make those decisions for you, we just make sure that you have options. Most of the new applications that Oracle is creating now are AI agents that were generated, not handwritten, and they're connected by workflows. The interesting thing: when we generate these applications, there are no security holes in them, because the application generator doesn't forget things, doesn't leave things out, doesn't make those kinds of mistakes. Every application that we generate is stateless and reliable.
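The declarative idea can be shown in miniature: instead of hand-writing procedural code, you state what the application needs and let a generator emit the logic. The spec format and generator below are invented for illustration; APEX's actual declarative language is far richer than this.

```python
# Miniature of declarative generation: declare fields and constraints,
# and a generator produces the procedural validation code for you.

spec = {
    "entity": "patient",
    "fields": {
        "name": {"type": str, "required": True},
        "age": {"type": int, "required": True, "min": 0},
    },
}

def make_validator(spec):
    """Generate a validation function from the declarative spec."""
    def validate(record):
        errors = []
        for field, rules in spec["fields"].items():
            if field not in record:
                if rules.get("required"):
                    errors.append(f"{field} is required")
                continue
            value = record[field]
            if not isinstance(value, rules["type"]):
                errors.append(f"{field} has wrong type")
            elif "min" in rules and value < rules["min"]:
                errors.append(f"{field} below minimum")
        return errors
    return validate

validate_patient = make_validator(spec)
# validate_patient({"name": "Ada", "age": 36}) → []
# validate_patient({"age": -1}) → ["name is required", "age below minimum"]
```

Because every check comes from the spec, the generator cannot "forget" a constraint the way a hand-written routine can, which is the point being made about generated applications and security holes.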

In other words, if the computer that application was running on suddenly blows up, loses power, whatever happens, something catches fire, that application can immediately restart in a different data center because it is stateless. Even though it stopped running in location A, it will pick up running in location B without missing a beat, without losing any data, without the customer ever perceiving it. When you generate these applications, they have built-in backup, no single point of failure, built-in reliability, built-in security, and built-in scalability. This isn't how most applications are written. Most low-code application programming languages are designed to write departmental things. Maybe they work for 20, 30, 40 users, but after that they start to slow down because they're really not designed to scale to millions of users. Because we generate the application, the design is always the same.
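The stateless pattern described above can be sketched very simply: the handler keeps no local state, so any instance in any data center can serve the next request. The store interface and session data here are invented; a real system would use a durable, replicated database rather than a dict.

```python
# Sketch of a stateless handler: every request reads and writes its
# state in an external store, so the handler instance itself holds
# nothing and can be replaced mid-conversation without losing data.

external_store = {}  # stand-in for a durable, replicated store

def handle_request(session_id, item, store=external_store):
    """Append an item to a session's cart; the handler keeps no state."""
    cart = store.get(session_id, [])
    cart = cart + [item]           # derive new state rather than mutate
    store[session_id] = cart       # persist before responding
    return {"session": session_id, "cart": cart}

# "Instance A" handles the first request...
handle_request("s1", "gloves")
# ...then "instance B" (a fresh call with no shared locals) picks up
# seamlessly, because the state lives in the store, not the handler.
result = handle_request("s1", "masks")
# result["cart"] → ["gloves", "masks"]
```

If the process serving the first request dies, any other process pointed at the same store continues the session, which is the location-A-to-location-B failover behavior described above.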

We always design it for millions of users, even if there are only five. It will run faster that way and use fewer resources. The productivity gains we're getting from this are one of the reasons we feel so good about our efforts in healthcare: we can rebuild the Cerner code base. We can rebuild the entire Cerner code base, modernize it using AI, build a modern version of Cerner by generating it. We've got all of the code for clinics operating already. Next year we'll have all acute hospitals. Everything that Cerner wrote over, what, a quarter of a century, we will have rewritten in three years. And what ours does is much more than theirs ever did. We attacked the problem not just as automating a hospital or a clinic, but as automating the entire ecosystem.

Those are the kind of enormous productivity gains you get when you use these incredible AI tools. The example of rebuilding Cerner is fascinating because that's really not all we're doing. Yes, we're rebuilding Cerner, but we're also building accounting systems designed for hospitals, HR systems designed for hospitals, and hospitals are very unusual. They're kind of 50-50 gig economy, in and out. A lot of nurses will work for one hospital, work for private patients; they'll have schedules. You don't know how many nurses, or doctors for that matter, you need on Monday. It depends what you're doing, how many patients you're seeing, how many operating theaters are available.

An HR system for a hospital is very, very different, complicated. There are a lot of certifications that doctors, nurses, and other health professionals and technicians have to get in order to do certain tests, do certain procedures, or handle certain patients. Our HR system has to deal with those certifications, schedule the training, schedule when they're working. They have very peculiar schedules. They trade shifts a lot; the system has to be flexible about all of that, paying them properly when they're working, a lot of overtime, but also understanding when they're only working two days a week here and four days that week at another hospital across town. We're building HR systems, accounting systems, and banking systems. This will be the one that maybe surprises you, and I'll go into an example: banking systems that cater to hospitals, making hospitals loans based on their receivables.

I'm going to describe an AI agent. Our goal was not just to automate hospitals and clinics the way Cerner or another competitor of ours did. We thought, following Elon Musk's rule, that if we really want to be successful in healthcare, we can't just automate hospitals and clinics. We have to automate the entire ecosystem. Like Elon had to build a worldwide charging network or electric cars weren't going to work. He couldn't just make the cars and assume that Standard Oil would provide the fuel, which is what Ford did. To build electric cars, he not only had to design an electric car, manufacture batteries, put robots in the manufacturing plant, and figure out how to sell cars on the Internet. He also had to build a worldwide network of charging stations. He had to solve the entire problem.

He had to build a complete ecosystem for electric cars. If we want to automate hospitals and clinics, those hospitals and clinics are not going to be very efficient if the people who regulate them are not also automated. Or if the patients who are making appointments or receiving the results of a blood test don't also have access to that automation technology. You have to automate the patient, the provider, the payer, the regulator, the pharma companies, the banks who finance the hospitals, and the governments who regulate the hospitals and collect information from them. You have to automate the entire ecosystem; then you will get a truly modern, efficient healthcare system. That's what we were after when we bought Cerner. As a first step, anyway.

One of the most interesting AI agents we've ever built connects providers to payers. This is a very interesting problem, and it took me a while to fully grasp it when we were working on this. What do we want the hospital to do? The hospital has to figure out: what is the best possible care I can give this patient? That's kind of true, but let's say you're in the UK and the best possible care says you have high blood sugar and I've got to put you on Ozempic or another GLP-1. Guess what? The NHS in the UK doesn't pay for Ozempic; they won't reimburse you for it, and it's very expensive. Are there any other drugs that will help you manage your blood sugar levels? Yes, there are. Are they pretty good? Yes, those drugs are pretty good.

Will the NHS reimburse you for those? Yes, they will. What you're really doing when you're automating a hospital in the UK is working with the doctor to come up with the best possible quality of care that is fully reimbursable, if the patient can't afford to pay themselves. Those two things are tightly coupled together. It's pointless to prescribe Ozempic to someone in the UK who can't afford it, because the government is the insurance company in the UK and the NHS doesn't pay for Ozempic. That's true today. This is what we had to build, and we had to build something that worked in the United States and in the UK and all over the world. The problem was: best possible care that's fully reimbursable. That was our goal.

The AI model that we built first used RAG to access the latest medical literature and your latest test results in the EHR (vital signs, all your blood tests, all of that information) to assist the doctor in coming up with the best possible care. We had to know things like: there's a new clinical trial for this particular type of cancer that applies to this patient, and the doctor should consider putting this patient in that clinical trial. The AI model, not surprisingly, will have all of the latest information about clinical trials, about which drug is working better than the other drugs for this particular patient the doctor is looking at. The AI model will provide that information to doctors.

As the doctor tries to figure out the best possible care for the patient, the AI model also uses RAG to access the latest rules and policies. In the United States, those would be insurance policies and rules depending on what insurance you have. Do you have Medicare? Medicaid? Supplementary insurance? I've got to figure out what is covered, what gets reimbursed. It's really those intersecting sets: what's the best care, and what's fully reimbursable? I have to train the model on all of the insurance rules to make sure that what the doctor is prescribing is fully reimbursable. I've got to catch little snags along the way.

Actually, the NHS does reimburse for Ozempic in the UK if your body mass index is beyond a certain point, and I've got to make sure that the doctor knows that. I can let the doctor know: actually, this case is an exception. This patient is eligible for Ozempic because they're overweight past a certain threshold, and the rule that was just changed says they can now get Ozempic. I had to do that. The AI agent then reasons with all of this data to propose the best possible care at the highest reimbursement level achievable. That's the goal in most places in the world where the government is the payer of healthcare. One last thing that we also did, and we have examples of this that we've experienced: a lot of clinics and hospitals in the world, including in the United States, don't have lots of cash on hand.
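
The "best possible care that's fully reimbursable" idea can be sketched as a ranking-plus-filter. This is a toy illustration only; the drug names, clinical scores, BMI threshold, and rule structure are all invented for the example, not real NHS policy or Oracle's implementation.

```python
# Hypothetical sketch: rank candidate treatments by clinical score,
# then keep only the ones the payer will reimburse for this patient.
# All names, scores, and rules below are illustrative.

def covered(treatment, payer_rules, patient):
    """Return True if the payer reimburses this treatment for this patient."""
    rule = payer_rules.get(treatment["name"])
    if rule is None:
        return False
    # A rule may carry an exception, e.g. covered only above a BMI threshold.
    if "min_bmi" in rule:
        return patient["bmi"] >= rule["min_bmi"]
    return rule.get("reimbursed", False)

def best_reimbursable_care(candidates, payer_rules, patient):
    """Pick the highest-scoring treatment the payer will actually pay for."""
    eligible = [t for t in candidates if covered(t, payer_rules, patient)]
    return max(eligible, key=lambda t: t["clinical_score"], default=None)

candidates = [
    {"name": "ozempic",   "clinical_score": 0.95},
    {"name": "metformin", "clinical_score": 0.80},
]
nhs_rules = {
    "ozempic":   {"min_bmi": 35},          # exception: covered above a BMI threshold
    "metformin": {"reimbursed": True},
}

print(best_reimbursable_care(candidates, nhs_rules, {"bmi": 28})["name"])  # metformin
print(best_reimbursable_care(candidates, nhs_rules, {"bmi": 40})["name"])  # ozempic
```

The point of the sketch is the intersection: the clinically best option only wins if it survives the reimbursement filter, and a rule change (like the BMI exception) flips the answer.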

If they haven't gotten the insurance reimbursements on time, sometimes they can't provide care to new patients. They're just running short of cash all the time. What the AI agent can do here is give the bank all of the information about a particular collection of reimbursements, assuring the bank that those reimbursements adhere to all of the reimbursement rules and the clinic or hospital will in fact be reimbursed: a 99% chance, a 95% chance they'll be reimbursed. You can discount a little bit, and the bank will then loan on those receivables. It's a fascinating set of problems. When you look at the financial aspects of the healthcare ecosystem, it's very expensive to run. There are a lot of administrative tasks that we can automate away using AI, so patients can spend more time with their doctors, who are worried about care.
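
The receivables-lending idea reduces to simple arithmetic: weight each claim by its estimated probability of reimbursement, then apply a discount. The claim amounts, probabilities, and 10% haircut below are invented for illustration.

```python
# Illustrative arithmetic for lending against healthcare receivables:
# expected value of the claims, minus a bank haircut. Figures are made up.

claims = [
    # (claim amount in dollars, estimated probability of reimbursement)
    (120_000, 0.99),
    ( 80_000, 0.95),
    ( 50_000, 0.90),
]

expected_value = sum(amount * prob for amount, prob in claims)
haircut = 0.10                           # bank's discount on the expected value
loanable = expected_value * (1 - haircut)

print(f"expected reimbursements: ${expected_value:,.0f}")   # $239,800
print(f"loanable after haircut:  ${loanable:,.0f}")         # $215,820
```

The better the agent's assurance that the claims follow the reimbursement rules, the higher those probabilities, and the smaller the haircut the bank needs to take.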

We can figure out how to get the highest achievable reimbursement and how to get the hospital the cash it needs to continue operating. That's all done via automation. The doctor's time and the nurse's time is spent much, much more efficiently with patients. As I say, AI will make things so much better for all of us. Oracle Cloud is very unusual. Oracle, in the simplest sense, does infrastructure and applications. We do scaled enterprise applications and we do scaled AI infrastructure. We're the only cloud that does both. The other big clouds, Microsoft, Amazon, and Google, really do not do healthcare applications, enterprise applications, big financial applications. They may or may not develop AI technology themselves: Google does, the other two don't.

They may or may not develop AI technology, but they're not building large scaled applications where they're trying to automate industries or automate ecosystems using this technology. Our goals are different from those other clouds. We're a participant in creating AI technology, and we're also a participant in using that technology to solve problems in different ecosystems, in different industries. We're obviously very large in training the AI models. We have a bunch of those models, some of which we trained, some of which we didn't, in our cloud for you to use to solve your problems, for you to do AI reasoning on your private data to solve the problems you want to solve at your company. We have AI code generators. The thing Anthropic is most famous for is code generation.

We've been doing this for a long time. Our new APEX code generator: one thing I can say about APEX is that every application it generates is scalable, secure, and reliable, every one. We've been doing that for a long time now. We're doing complete code generation using AI and APEX. We are the only ones building suites of applications to modernize not just industries, but complete ecosystems. Healthcare is one example, but utilities is another. We're taking on entire ecosystems, which makes things work much more efficiently. You're only as strong as the weakest link in the chain. Say you have to interact with a regulator that oversees clinical trials, and the clinical trial regulator says: once you finish your clinical trial, print out all the results and send it to us in boxes of paper.

I won't mention any names, but that happens all over the world. It makes new drugs incredibly expensive and take forever to come out. It's a huge problem. You have to automate these entire ecosystems; that's the goal. Then agents: you have to build these complex processes, these robotic pieces of software called AI agents, that not only automate processes within a company, but also automate processes between companies. How one company talks to another company; how a hospital talks to a bank. Okay, that's phase one of my presentation. They'll be serving dinner; that's why I arrived a little late, because this way we can go straight to dinner when we're done. That part went into how the AI models work, how they're built, and how Oracle is different.

I'd like to just take a look at the world as I think it's going to be because of AI. I think, by and large, we are going to live much better lives. Healthier, longer lives, eat better food, live in better houses. It should be a much better world because these tools are so enormously powerful, but some of the things they'll do are a little bit shocking. These are some of the things we're working on, and I'll go through them line by line. We're working on biometrics. We can prevent identity theft using AI. No more logging on, no more passwords that get stolen. No more intrusions, no more data that gets stolen. No more credit card fraud; no more having to send in your credit card and get a new one.

We can make them all fraud proof, if that's the kind of credit card you want. I don't know of anyone who likes spending time in the hospital. The hospitals have figured out that the sooner they can get you out of the hospital, the better it is for them. Also, some of the nastiest bugs, some of the nastiest pathogens, are lurking in the halls of hospitals. The quicker we can get you home, the happier the patient, and you're safer at home. We can build IoT medical devices so we can monitor you at home as well as we can monitor you in the hospital. Even if, in an emergency, you're being transferred back and forth, the ambulance is also always connected. If you have a patient at home, they're always being monitored by hospital staff.

If you've got a patient being transported in an ambulance, there's an audio, video, and digital connection between the ambulance and the emergency room. Then there's diagnostic imaging, when AI reads the images. I remember one time I flipped my motorcycle upside down. Don't ask what I was doing. I wasn't that young either; I don't even have that as an excuse. I landed on my right side and I broke eight ribs. I remember going into an MRI and they were counting: 1, 2, 3, 4. What are you doing? I'm counting your broken ribs. Oh, great. I was having an MRI, but the only thing they did was count my broken ribs. There is all this other data that that MRI produced. No one looked at it. That's always the case when you get one of these scans.

You're looking for one or two things, and the rest of the stuff you just ignore. AI will find things that no one was looking for. Plus, it's just more precise and more accurate. If I do this, I'll finish all the slides on this one page, so I'm going to just do this. Identity theft. As we said on the earlier slides, AI knows who you are: it recognizes your face, your voice, your fingerprint. When you sit down at the computer, it says, hi, Safra, what do you want to do today? Passwords are insane. Passwords get stolen. People write them down. The fact that your password has to be 17 characters long with at least two underscores next to each other. What, are you out of your mind?

You think this is a good idea? The only way I'll ever remember this is to write it down and put it on a sticky note right next to my computer. Why? This is just idiotic. No passwords. It's all biometric. Much better for everybody. Better data privacy. Credit cards, if you want them: we will have optional credit cards that are biometric, because it's very hard to imitate people. That will dramatically reduce credit card fraud. The banks pay for all the credit card fraud; if the banks don't have to pay that, your interest rates are going to go down. It's going to be better for everybody. It's going to save a lot of money and keep your data private. Patient monitoring. I mentioned this.

We're going to have these fabulous, low cost medical devices (I'm going to come to how they're going to be so low cost) that we can mass produce at higher quality. All medical devices should be attached to the Internet, and their readings should go into a secure database where it's your data, and you decide who gets to see it: your doctor, a health professional who's monitoring your care, and you can keep it private. That data is immediately accessible by your doc, and if your doc has set an alarm, say if your blood pressure drops below a certain threshold or goes above a certain threshold, they'll be immediately notified. You can do all of that. You're going to get much better health monitoring at home, in the ambulance, wherever.
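
The alarm idea above is just a threshold rule per vital sign. A minimal sketch, with thresholds and readings invented for illustration (real clinical alarm limits would come from the care team, not from code like this):

```python
# Toy vital-sign alarm: a doctor sets low/high thresholds, and any
# reading outside them produces a notification. Values are illustrative.

ALARMS = {
    # vital sign: (low threshold, high threshold)
    "systolic_bp": (90, 180),
    "heart_rate":  (40, 130),
}

def check_reading(vital, value):
    """Return an alert string if the reading breaches its thresholds, else None."""
    low, high = ALARMS[vital]
    if value < low:
        return f"ALERT: {vital} {value} below {low}"
    if value > high:
        return f"ALERT: {vital} {value} above {high}"
    return None

print(check_reading("systolic_bp", 85))   # ALERT: systolic_bp 85 below 90
print(check_reading("heart_rate", 72))    # None
```

In a deployed system the same rule would run against a stream of device readings and route the alert to whoever the patient has authorized to monitor them.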

As I say, when moving between your home and the emergency room, the ER doctors are talking to the EMTs in the ambulance. Believe it or not, we're building one. We're actually building these prototypes. Will we mass produce an ambulance? I have no idea. If you had told me a couple of years ago we'd be building billion watt power plants, I would have said you need to get more rest; that's not going to happen. We're looking at doing this because the ambulance is connected, and it's loaded with AI, and it's just a much safer way to transport patients. On diagnostic imaging: my wife was pregnant, we were living in Hawaii at the time, and she went in for a sonogram. Two things were crazy.

One is the tech took a ruler and was measuring fetal development with a ruler on the screen of the sonogram: how big the skull was, how long the spinal cord was. I said, whoa, whoa, whoa. That's a two-dimensional ruler measuring a three-dimensional shape floating in a fluid. Are you kidding? Who thinks this is a good idea? The computer should do that. We can do that very accurately with AI; even with primitive AI, we should have been able to do that. It then got worse. We were on the island of Lanai and the doc was actually in Honolulu, and the tech held up her iPhone to the sonogram screen so that the doc could see the fetal image on the sonogram.

Oh my God, no. You can't record this in high resolution and transmit it digitally? You're FaceTiming the image over. I remember saying one thing to the tech: look, I promise to fix this. This is awful. I can't believe this is going on. Of course, AI has 3D vision. We can accurately measure fetal development on the sonogram. Again, AI finds things doctors aren't looking for. On imaging right now: one of our partners looks at tumor biopsy slides and can diagnose the cancer from the image in a few minutes. Going through the entire process, doing all the genetic testing and all of these other things, might take a week or two: a week or two of worry, and a week or two without treatment.

AI is going to allow us to get a response very quickly: either, you're fine, everything's good, or no, you need to start this drug right away. In either case, we get better outcomes. This next one is very interesting. This is a device we're working on called a metagenomic testing device, for identifying pathogens when someone gets sick. We have a testing methodology called PCR: if we suspect you have influenza A or influenza B or a coronavirus like COVID-19, we can test against a panel of some number of known respiratory viruses. If you have something odd, it just comes up PCR negative. We don't know what it is. What we really want to do is genomic testing on it.
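
The difference between a fixed panel and sequencing everything can be sketched in a few lines. This is a deliberately toy model; the pathogen names and "sequences" are invented, and real assays do nothing this simple:

```python
# Toy contrast: a PCR-style panel can only answer yes/no for the targets
# it was designed for, while a metagenomic approach matches whatever is
# in the sample against a broad reference database.

PCR_PANEL = {"influenza_a", "influenza_b", "rsv"}

REFERENCE_DB = {
    "influenza_a": "ATGGAGAAAA",
    "influenza_b": "ATGTCGCTGT",
    "rsv":         "ATGGGCAGCA",
    "novel_cov":   "ATGTTTGTTT",   # not on the PCR panel
}

def pcr_test(sample_reads, target):
    """Panel test: only meaningful for targets the panel includes."""
    if target not in PCR_PANEL:
        raise ValueError(f"{target} is not on this panel")
    return REFERENCE_DB[target] in sample_reads

def metagenomic_id(sample_reads):
    """Sequencing-style match: report every reference hit in the sample."""
    return sorted(name for name, seq in REFERENCE_DB.items()
                  if seq in sample_reads)

sample = {"ATGTTTGTTT", "GGGCCC"}          # contains the novel pathogen
print(pcr_test(sample, "influenza_a"))      # False: the panel misses it
print(metagenomic_id(sample))               # ['novel_cov']
```

The panel returns a correct but useless "negative" for anything it wasn't designed to look for; the sequencing match identifies the novel pathogen because it compares against everything in the reference database.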

Before we can do genomic testing on it, we have to culture it, and wait several days. It could take a week or two weeks before we know what you had. Either it went away, or you did if it was particularly bad. This is a new sensor that will simply do gene sequencing of everything in the sample. You take blood, and obviously in your blood are your own genes. Also in your blood is something called ctDNA, circulating tumor DNA. If you have cancer, even an early stage one or stage two cancer, you have small fragments of circulating tumor DNA that we can discover by sequencing every gene, sequencing everything alive in your blood.

The problem with circulating tumor DNA, and people have tried to work with it in the past, is that your immune system will cure a lot of cancers without you ever knowing you have them. The immune system clears up a lot of cancers before you're ever symptomatic. If we keep telling you, oh my God, we found this cancer, we need to start treating you, when in fact your immune system is going to clean it up and we should do absolutely nothing, the false positives are deadly. However, with AI we can now look at the fragments and distinguish between a false positive and a real, serious problem that you should start treating immediately, early. This has the promise of giving us very, very early cancer diagnosis, which everyone knows leads to a much higher likelihood of a positive outcome.
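
The false-positive problem described above is a decision rule: detection alone isn't enough, you also need an estimate that the lesion will progress rather than be cleared. A toy sketch, with an invented probability and threshold standing in for whatever a real model would produce:

```python
# Toy decision rule for the ctDNA false-positive problem: act only when
# a progression estimate crosses a threshold. Numbers are illustrative.

def recommend(ctdna_detected, progression_prob, threshold=0.5):
    """Flag for treatment only when progression is judged likely."""
    if not ctdna_detected:
        return "no action"
    if progression_prob >= threshold:
        return "treat early"
    return "monitor"    # likely cleared by the immune system

print(recommend(True, 0.05))   # monitor
print(recommend(True, 0.90))   # treat early
print(recommend(False, 0.90))  # no action
```

The hard part, of course, is the `progression_prob` estimate itself; the transcript's claim is that AI models looking at the fragments can supply it well enough to make early detection useful rather than harmful.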

It will also allow us to find any bacteria, any fungus, any virus, any living organism you're infected with, any pathogen, and tell you exactly what that pathogen is, even if it's novel, like COVID-19 was. We know how to treat it. It'll tell you if that pathogen is resistant to certain antibiotics, specifically which antibiotics it's resistant to, and which antibiotics we should treat you with. We actually have a partner here, who I think presented earlier, working on that same exact problem, which is very, very important. If you imagine this device being a low-cost device in the pathology departments of hospitals all over the world, we can do this one blood test and find whatever pathogen you're infected with.

If we had that, we never would have been caught off guard by COVID-19. We would have had early warning. We would have discovered it far earlier than we did. Those metagenomic sequencers would be the perfect early warning system for pandemics. That's why we're working on them, and that's why we need them. That means building all of these medical devices, and building them reliably. If you want to put a metagenomic sequencer in every hospital all over the world, or most of the hospitals all over the world, they can't cost a million dollars. They can't cost $100,000. You have to make them cost effectively. You have to mass produce them. You have to make them in robot factories. If you make them in robot factories, you get much higher quality and dramatically lower costs. I think we have a video.

This is a disc: you actually put the sample into the disc, spin the disc, and run all of these tests on the disc. The video, when I saw it, lasted three minutes; Maddie told me, no way am I putting that whole video in your presentation. It is remarkable. There are no people in the room when the disc is being built. Here's another one. You'll be happy: we don't have a video, just a couple of pictures. Growing indoors reduces the amount of water that we use to grow food by 90%. That in itself is essential, because we are running out of food in the world. I think by 2050, Africa will be our most populous continent. Think about that. Asia is by far the most populous today.

Asia has India and China, big countries with a lot of people, and Africa will be larger. We need to produce much more food than we currently do. We're going to run out of water. We're going to run out of arable land. We can't keep taking habitat and converting it to farmland. We have to be much more efficient. By growing in greenhouses and moving plants around, you save space: plants only need a lot of room in the few weeks right before they're harvested. Otherwise, they can grow in much more confined areas. If you can move the plants around, you use much less water and much less space. You save habitat. If you're growing indoors, you can grow near urban centers.

I mean, I don't suggest you put a greenhouse right in the middle of New York, but you can put it 50 miles away from New York, and you're growing near population centers. The CO2 output for transporting the food to population centers is greatly reduced. The food is much fresher. In a greenhouse, there's a harvest every morning, and it's delivered to the grocery that afternoon and can be eaten that evening. The food is fresher, lower cost, more nutritious, and tastier. We're actually building these robotic greenhouses. There should be a picture coming up. Yeah, that's real. Just hold that. As I pointed out to Elon, this is also a Martian habitat. This building, which is very large, you can imagine as a greenhouse. That yellow thing on the lower part is an overbot.

That's a rail system that moves the plants around from one location to another. No human beings are allowed in the growing area, because human beings contaminate the growing area. We literally lift the plants up and move them into a harvesting area where people are allowed. Also, the growing area is very, very high in CO2, which is good for plants, not so good for human beings, and it's very humid; it's very unpleasant for people. The building, by the way, has no structure. It is an air pressure building, a positive air pressure building. Basically, think of fans keeping the pressure inside the building higher than the pressure outside the building.

That's what holds up the roof, which is made of ETFE, the most sunlight transparent material known to man, and also quite strong. Those are steel cables in the arches, anchored to a concrete footing around the base. Literally, you have a robot dig the footing, you snap the steel cables onto the fiducials on the footing, and then you turn the fan on and you inflate the building. The building is fabric with steel cables; you fold it up in nice packages and transport it to where you're building it, or you transport it to Mars on one of those big rockets, and then Elon can build his house right in the middle of it and have beautiful rose gardens and all of that other stuff. It'll be lovely. I'm not going.

I will go to this one: the first ones are in California and Texas, which is way closer than Mars. Here's another picture of the same building. They're big, and the green areas are the harvesting areas. The walls lift up where the trucks arrive to deliver the food. This next part is going to be shocking, and we've actually done it. It's a company I'm involved with called Wild Bio, an Oxford company. I've got an institute at Oxford called EIT, the first time I've ever put my name, the family name, on something. One of the companies we have there is Wild Bio. The first thing they did was modify the wheat plant, which is a grass.

They modified wheat to produce 20% more food per acre, more grain per acre. Since we're running out of food, that seemed like a good idea. Now, it's really interesting. What wheat does, basically, is take CO2 and sunlight and mix them together to create food. If you're growing more grain, you're consuming more CO2. Where that CO2 ends up, if you have AI designing the wheat, is really up to us. We built this wheat that's much more efficient at photosynthesis than conventional wheat. Once we've absorbed the CO2 into the wheat, we could choose to take that CO2 and convert it into calcium carbonate. By the way, that's exactly how coral reefs get built.

A coral reef converts CO2 and sunlight into a structure made of an inert mineral called calcium carbonate. We grow a lot of wheat around the world; every spring we plant several Amazon rainforests' worth of wheat. If you want to, you can not only produce more grain, you can convert more CO2 directly into calcium carbonate, thereby removing it from the atmosphere forever. There are all these interesting ideas on how to manage the climate, the atmosphere, and atmospheric CO2, but in this particular case, you could go from the current level of 440 parts per million of CO2 in the atmosphere, which some people think is too high, and reduce it to 400 parts per million.

You can do that simply by having the wheat and the corn and the soybeans and so on convert CO2 into calcium carbonate. You can manage the CO2 level in the atmosphere to whatever level you deem appropriate. Maybe you think the sweet spot is 400 parts per million. Someone will say, no, no, we want to get rid of all the CO2 in the atmosphere. Pack a lunch, because if you get rid of all the CO2 in the atmosphere, all the plants on the planet will die. Don't go to zero. That's a really bad idea. The sweet spot in terms of stabilizing the climate probably is going from 440 to 400. It's something we can do, and it's basically free; there's no cost in doing it. It's just a natural process called biomineralization. We could use our food crops.
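
For scale, the 440 to 400 ppm idea can be put in tonnes. A commonly cited conversion is that 1 ppm of atmospheric CO2 corresponds to roughly 2.13 gigatonnes of carbon, about 7.8 Gt of CO2; treat these factors as approximations, not exact figures.

```python
# Back-of-the-envelope for the 440 -> 400 ppm drawdown.
# Approximate conversion: 1 ppm CO2 ~ 2.13 GtC ~ 7.8 Gt CO2.

GT_CO2_PER_PPM = 2.13 * (44.01 / 12.01)   # carbon mass scaled to CO2 mass

current_ppm = 440
target_ppm = 400
drawdown_gt = (current_ppm - target_ppm) * GT_CO2_PER_PPM

print(f"CO2 to remove for {current_ppm} -> {target_ppm} ppm: "
      f"{drawdown_gt:,.0f} Gt")    # on the order of 310 Gt of CO2
```

That's roughly 300 gigatonnes of CO2, which gives a sense of how much biomass-to-mineral conversion the crops-as-carbon-sink idea would actually have to accomplish.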

We could actually increase the food yield while lowering CO2. This is what I mean by AI being a pretty amazing tool. There are a lot of problems we can tackle that we've been unable to solve for a very, very long time, problems that are very contentious within our society. You absolutely have the ability to do this. Corn: we're also working on corn. Another huge problem with agriculture is nitrogen fertilizer. You fertilize all these crops to increase the yield. The problem is that fertilizers are made of nitrogen; it rains, and you get huge nitrogen runoffs into river basins and into the ocean. That pollution does a lot of damage to our environment. Rather than using nitrogen fertilizer to nourish the plant, consider that the atmosphere has a huge amount of nitrogen in it.

Why not simply engineer the plant to take the nitrogen directly out of the atmosphere? We know how to do that. There's an enzyme called nitrogenase that quite literally takes atmospheric nitrogen and makes it available as a nutrient for the plant. Soybeans do this, for example. You don't have to use nitrogen fertilizer. You can get rid of all the nitrogen fertilizer. In Africa, a lot of farms can't afford to use nitrogen fertilizer anyway, and even for the ones that can afford it, it's a waste of money and it's damaging to the environment. You can engineer the plant to get the nitrogen directly from the atmosphere.

The plant is just as tasty, just as nutritious, and just as healthy getting the nitrogen from the atmosphere as getting it from fertilizer added to the soil. Another problem AI makes it easy for us to solve. You're going to be very happy: this is my last slide with words on it. I have one more video and one more picture, and then the three of you who are going to stay can ask questions. Autonomous drones. We've all seen the way drones have been developed in Ukraine for military purposes. Fortunately, drones have very wonderful uses beyond how they're being used in Ukraine, in the war in Europe, which is just terrible.

We built an air traffic control system for drones, and we're actually using drones to deliver blood samples from clinics to testing laboratories. We built what we call an RFID specimen vault: we put an RFID tag on the sample, and no one knows this is Larry Ellison's blood or whoever's. They just know there's an ID tag on the blood. The test results go into the cloud, and eventually they make it back to my doctor and to me. Along the chain of custody, no one can tell whose sample it is, so my personal privacy is not compromised at all. The other problem is that sometimes they do a great job of protecting your personal privacy by losing your blood sample, or thinking it was somebody else's blood sample.

That's not a great way to protect our personal privacy. So we built these specimen vaults to take samples from the hospital or the clinic to the lab, where the results then go into the cloud. The other thing the drones can do is detect forest fires immediately with infrared cameras. They can even figure out who set the forest fires. Tragically, the Palisades fire and a number of other fires in California were set by arsonists. Unbelievable tragedies. We can detect the fire immediately and start to fight it immediately, and if someone set the fire, we can figure that out too. We also shouldn't have police cars chasing other cars around in those high speed chases. The videos look kind of cool.

But they are very dangerous, not just for the police, but for civilians in cars nearby. We can have drones follow those cars instead. It's way better. Okay, I'm going to now go to my last picture. That's the RFID specimen vault over there. The last video will be coming up. There it is, sure enough. You can deploy these; it'd be great in the Palisades during the dry season. You send the drones up. You've got a lost hiker out in the wilderness, something like that: they're portable. I think it's going to land now, and if it gets down safely, I will take my first question. It's a video. It's going to get down safely.
