Welcome everyone to Google Cloud Next. Just one year ago, we stood on this same stage and promised a new future for AI. Today, that future is running in production at a scale that the world has never seen. Over the last year, we didn't just see adoption, we saw transformation. Nearly 75% of Google Cloud customers now leverage our AI products to power their businesses. We have thousands of agents and services across every industry, reaching billions of people through the global scale of our partner network. You have moved beyond the pilot. The experimentation phase is behind us, and now the real challenge begins. How do you move AI into production across your entire enterprise? The answer is a unified stack. You cannot deliver AI by piecing together a puzzle of fragmented silicon and disconnected models.
To drive real value, you need an architecture where chips are designed for the models, models are grounded in your data, and agents and applications are built with models and secured by the infrastructure. Google uses this exact same open stack to reimagine how we serve users with our AI tools across Search, YouTube, Chrome, and Android. To take you inside that journey, let's hear from Sundar Pichai.
Thanks, Thomas, and hello everyone. We are so glad you're here. The pace of technological change has been faster than I've ever seen, and I've been around technology for a while. In fact, this Sunday, I'm celebrating my Googleversary. 22 years, and I'm still feeling lucky, and I hope that luck extends to all of you in Vegas this week. I was drawn to Google all those years ago because of its ambitious mission. From the web, to mobile, to AI, every platform shift has given us new opportunities to advance our mission and to help our customers and partners achieve yours. As we move into the agentic era, we are taking this to the next level. We are making big investments now and for the future. In 2022, we were investing $31 billion in CapEx. This year, we plan to invest between $175 billion-$185 billion in total CapEx.
A nearly 6x increase in just four years. For 2026, just over half of our machine learning compute is expected to go towards the cloud business, so it'll greatly benefit all of you. These investments are how we continue to stay at the frontier and ensure you're at the cutting edge as well. A big focus for us is to always be customer zero for our own technologies, so we can be a better partner to all of you. Let me share a few examples of how we are rewiring our work with AI. First, coding. We've been using AI to generate code internally at Google for a while. Today, nearly 75% of all new code at Google is AI-generated and approved by engineers, up from 50% last fall. We are now shifting to truly agentic workflows.
Our engineers are orchestrating fully autonomous digital task forces, firing off agents, and accomplishing incredible things. As one example of how we are using agents, we recently executed a particularly complex code migration. Anyone familiar knows these migrations can take a while to get done. We created a system of agents taking on three different types of roles: planners, orchestrators, and coders. Working together with engineers, we completed the migration 6x faster than we could have a year ago, and now we are applying that approach across our entire development cycle, from experimentation to testing, to evals. It's not just developers who are infusing AI into their workflows. Our marketing teams are fundamentally rethinking the creative process from concept to launch. Historically, adapting a campaign for every audience, channel, and country took weeks.
For the recent launch of Gemini and Chrome, teams used our models to rapidly generate thousands of variations of our creative assets. This enabled personalization at a massive scale. It also led to 70% faster turnaround and a 20% increase in conversion. We are not only getting to market faster, we are also doing it more effectively with AI. Our security teams are also seeing significant gains with AI. Each month, our teams receive unstructured threat reports at a scale that would take thousands of hours to review. A nearly impossible task. Today, our security operations center agents automatically triage tens of thousands of unstructured threat reports each month. By accelerating the extraction of critical intelligence and filtering out the noise, it's reduced threat mitigation time by over 90%. We are more on the front foot than ever before.
Those are just a few examples of what we are trying at Google. We are moving in a bold and responsible way. There's a lot of change happening, and it can feel like we are in the messy part of the innovation cycle. But we are starting to see the foundational building blocks coming together, unlocking a new wave of innovation. One thing that is super clear, we are firmly in the agentic Gemini era. Last fall, we introduced Gemini Enterprise as a front door to agentic AI in the workplace. What we have seen in just a few months since launch is how every employee in every organization can become a builder. This is an incredible shift. We are accomplishing bigger things faster, but this comes with complexity. The conversation has gone from, can we build an agent, to, how do we manage thousands of them?
Today, I'm excited to announce our new Gemini Enterprise Agent Platform. It provides the secure, full-stack connective tissue you need to build, scale, govern, and optimize your agents with confidence. Think of it as mission control for the agentic enterprise to help move your organization into the next phase of the agentic era. I'm going to turn it back over to Thomas to share more, but first, thanks again for being here. Together, we are building a blueprint for true business transformation, and I'm excited to see where it takes us next. Thomas, back to you.
Thank you, Sundar. Today, we're taking the next step in bringing Google AI to every employee and every workflow. Gemini Enterprise is now the end-to-end system for the agentic era, the connective tissue between your data, your people, and your goals. It transforms disconnected processes into a single intelligent flow. This is our blueprint for the agentic enterprise. It's also the answer to that fundamental equation: intelligence plus automation must deliver value. To make this work, you need context and action. Intelligence comes from your data. Automation is driven by agents. To solve this equation at scale, you need a complete, integrated system. Today, we will show you the layers of that system and all the innovations we're introducing at every layer. Starting with the AI Hypercomputer, the purpose-built foundation optimized for the physics of the agentic era.
The Agentic Data Cloud, the engine that provides agents with trusted business context. Agentic Defense, the autonomous protection that secures your entire AI life cycle. The Agentic Platform, the system to build, deploy, and manage agents. And finally, the Agentic Task Force, the pre-built, specialized agents we offer, ready to transform your business. Let's dive in. Now, the Gemini Enterprise Agent Platform is the environment where your business logic, your data, and your models converge to drive autonomous action. It expands on the previous capabilities of Vertex AI and brings new capabilities to enable your teams to build, scale, govern, and optimize agents with the same architectural rigor you apply to your most mission-critical systems. At its core, we build in support for state-of-the-art models. Our most advanced reasoning model, Gemini 3.1 Pro, is available in preview. It's optimized for complex workflow orchestration.
It bridges the gap between strategy and autonomous execution, interacting with your APIs and systems with minimal tuning. Industry leaders including Databricks, JetBrains, and Replit have chosen Gemini 3.1 Pro. We're also announcing Gemini 3.1 Flash Image, also known as Nano Banana 2, for high-fidelity visual assets. Veo 3.1 Lite, our most cost-effective video model designed to build high-volume video applications. And Lyria 3 Pro, our state-of-the-art model for enterprise and professional-grade audio and music. All of these models are available in preview. Finally, we support all the leading models from Anthropic, including Claude Opus, Sonnet, and Haiku, and today we're adding support for Anthropic's Claude Opus 4.7. While all of these models represent a massive leap in intelligence, their true value is realized when they're operationalized to solve mission-critical problems.
Earlier this year, we announced a monumental partnership with one of the world's most iconic brands that will bring the power of our technology to users everywhere around the world. We're collaborating with Apple as their preferred cloud provider to develop the next generation of Apple foundation models based on Gemini technology. These models will help power future Apple Intelligence features, including a more personalized Siri coming later this year. Leading organizations aren't just using Gemini Enterprise to work faster. They're using it to redefine what their businesses can do. Citi Wealth, in partnership with Google Cloud and DeepMind, today unveiled Citi Sky, an always-on AI-powered member of the Citi Wealth team that brings Citi's global intelligence to clients' fingertips. Citi Sky will provide an exemplary experience allowing clients and colleagues to ask more of Citi Wealth, whenever they need it, and in multiple languages.
Honeywell generates billions of insights from managing buildings by training digital twins on over a million product specifications. Liverpool is bringing their signature in-store service online, projecting a 10x return on investment on their new shopping assistant. And, for a mission that's quite literally out of this world, we were very proud to partner with NASA to use Gemini Enterprise agents to power flight readiness and ensure astronaut safety for Artemis II, which set the human space flight record for traveling the furthest distance from Earth. Now, we've built the agent platform to manage the entire life cycle of an agent. The low-code agent studio enables every employee to build and deploy agents using natural language. It grounds LLM reasoning in your specific business rules, bringing predictable, autonomous action to every workflow at scale.
To manage these assets, the agent registry provides a single point of control, indexing every internal agent and tool across your organization to ensure they're discoverable and governed. Similarly, our skills and tools registry allows you to define modular, reusable packages of instructions, scripts, and resources to teach agents to perform specialized tasks or repeatable workflows. We also expose skills for every GCP service and every service in Workspace in our skills registry. All of this is also supported by our agent marketplace, allowing you to search and deploy specialized agents from our global partner ecosystem directly in Gemini Enterprise, including Atlassian, Box, Lovable, Oracle, ServiceNow, Workday, and many, many more. Finally, with our native integration of Model Context Protocol, or MCP, you can connect our agent platform to any MCP server.
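Under the hood, MCP is a JSON-RPC 2.0 protocol, so an agent's tool discovery and invocation reduce to small structured messages on the wire. Here is a minimal sketch of those two message types; the server, tool name, and arguments are hypothetical, not a real Gemini Enterprise API:

```python
import json

def mcp_request(method: str, params: dict, request_id: int) -> str:
    """Build a JSON-RPC 2.0 message of the shape MCP uses."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    })

# Ask a hypothetical MCP server which tools it exposes...
list_tools = mcp_request("tools/list", {}, request_id=1)

# ...then invoke one of them by name with arguments.
call_tool = mcp_request(
    "tools/call",
    {"name": "query_inventory", "arguments": {"sku": "TUSCANY-SOFA-01"}},
    request_id=2,
)
```

In a real deployment these messages would travel over a transport such as stdio or HTTP to whichever MCP server the platform is connected to.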
We're also exposing all of our GCP services as MCP servers to allow you to interact seamlessly with GCP from any agent. Beyond helping you build individual agents, the platform also provides the framework to orchestrate and scale your entire agentic workforce. We enable agent-to-agent orchestration, allowing agents to seamlessly delegate tasks from one to another, including support for complex generative and deterministic orchestration patterns. This helps ensure that for your critical workflows, such as those that need compliance, your agents can follow well-specified paths every single time, guaranteeing predictable outcomes. Orchestration flows and agents can respond to events. These events can be real-time, schedule-based, trigger-based, or batch inference jobs. We're bringing zero trust verification to every agent and every orchestration step.
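The deterministic, event-triggered orchestration pattern just described can be sketched in miniature. All of the names, steps, and thresholds below are hypothetical illustrations, not platform APIs; the point is that a fixed sequence of steps with a shared context and an audit trail follows the same path on every run:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Flow:
    """A deterministic orchestration flow: named steps run in a fixed
    order, so a compliance-sensitive workflow takes the same path every
    time, and each step is recorded for auditing."""
    steps: list[tuple[str, Callable[[dict], dict]]] = field(default_factory=list)

    def step(self, name: str, fn: Callable[[dict], dict]) -> "Flow":
        self.steps.append((name, fn))
        return self

    def run(self, ctx: dict) -> dict:
        for name, fn in self.steps:
            ctx = fn(ctx)                       # each agent enriches shared context
            ctx.setdefault("audit", []).append(name)
        return ctx

# Hypothetical agents delegating to one another in a fixed sequence.
flow = (Flow()
        .step("research", lambda c: {**c, "trend": "organic modern"})
        .step("pricing",  lambda c: {**c, "price": 1299})
        .step("approval", lambda c: {**c, "approved": c["price"] < 1500}))

# The same flow could fire on a schedule, a real-time event, or a batch job;
# here we trigger it directly from a (hypothetical) inventory event.
result = flow.run({"event": "inventory.dead_stock_detected"})
```

Swapping the lambdas for model-backed agents changes the work done at each step, not the guaranteed order of the steps.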
Now with agent identity, every agent has a unique cryptographic ID and well-defined authorization policies that are traceable and auditable, ensuring you can track every action in your company and manage all your agents. They're also centrally managed through our agent gateway, which provides a single command center for policy enforcement across the organization. Paired with Model Armor, you protect your models and your proprietary enterprise data from threats such as sensitive data leakage. This integrated approach, from secure sandboxes to a single management console, provides the visibility and isolation you need to run your most sensitive workloads with a high degree of confidence. Optimization and observability are also built into the fabric of the platform. Agent observability delivers granular instrumentation, allowing you to visualize the full execution path of any agent using OTel-compliant telemetry.
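The per-agent identity, policy, and audit idea described above can be sketched with standard-library primitives. This is a toy illustration, not the platform's mechanism: all names are hypothetical, and a real deployment would use asymmetric keys managed by a central gateway rather than a local HMAC secret.

```python
import hashlib
import hmac
import json

class AgentIdentity:
    """Toy sketch: an agent holds a unique ID and key, checks its
    authorization policy before acting, and signs every action it takes
    so the audit log is traceable and tamper-evident."""

    def __init__(self, agent_id: str, secret: bytes, allowed: set[str]):
        self.agent_id = agent_id
        self._secret = secret
        self.allowed = allowed          # this agent's authorization policy

    def act(self, action: str, payload: dict, log: list) -> bool:
        if action not in self.allowed:  # policy enforced before anything runs
            return False
        record = json.dumps({"agent": self.agent_id, "action": action,
                             "payload": payload}, sort_keys=True)
        sig = hmac.new(self._secret, record.encode(), hashlib.sha256).hexdigest()
        log.append({"record": record, "sig": sig})
        return True

audit_log: list = []
pricing = AgentIdentity("pricing-agent-01", b"demo-key", {"reprice"})
ok = pricing.act("reprice", {"sku": "TUSCANY-SOFA-01", "price": 999}, audit_log)
denied = pricing.act("delete_catalog", {}, audit_log)  # outside its policy
```

Because every log entry carries a signature over the canonicalized record, any later tampering with the record invalidates the signature check.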
You can retrieve traces, monitor tool use, and quickly diagnose reasoning loops with fine-grained logging. All of the capabilities we've just detailed, the orchestration, zero trust security, and developer tools, form the core engine of Gemini Enterprise. While the agent platform is where your technical teams build and govern agents, the Gemini Enterprise application is the primary environment where your business actually operates. It's that new front door to AI for all your employees, turning complex agentic capabilities into a simple new way to work. By unifying data across your entire stack, including Google Workspace, coding tools, and enterprise systems, we've created a single intelligent flow for an entire organization. To give your teams a permanent memory, we're introducing Gemini Enterprise Projects. Projects give your agents a high-fidelity workspace and the ability to use Deep Think to solve your most complex business challenges without context pollution.
We're also adding Microsoft 365 interoperability, allowing you to export the docs and slides you've created within Canvas into common Microsoft Office formats. Leading innovators already use this platform to redefine their industries. Let's hear from one of them, Virgin Voyages.
[Presentation]
The era of the pilot is over. The era of the agent is here, but the true power comes from how it changes your workflow. Let's see it in action. Please welcome Erica Chuong.
Thanks, Thomas. Gemini Enterprise is a platform to build, manage, and interact with agents. I'll demonstrate how it can orchestrate multiple agents across platforms, share context for seamless handoffs, and streamline workflows. Imagine I work for a global furniture retailer. Here's my personalized homepage, where I can interact with internal business context and external sources in a single pane of glass. The agent gallery hosts my approved selection of agents, including built-in ones by Google and my company's agents, like this one for price and margin optimization, which autonomously orchestrates across agents, tools, and sources. Now, let's see Gemini Enterprise in action. To bring some less popular product lines back to life, we'll ask our agent to analyze current interior design trends, identify dead stock in our warehouse, and orchestrate a relaunch campaign. With that one prompt, multiple agents complete a series of actions in minutes instead of hours.
My market research agent, powered by Deep Research, analyzes the latest Google Search information alongside my own sales and CSAT data. My Data Insights agent is connecting to our global product catalog, regardless of location or format. It knows that dead stock refers to stale inventory, and it uses our Agentic Data Cloud to identify the right data sets. My product strategy agent pulls everything together. I'll go ahead and approve this plan. This can take a few minutes, so we'll fast forward. Okay, here are the highlights. Organic modern is a huge trend, and customers are paying top dollar. Our Tuscany collection has leftover inventory that actually fits right in, but sales haven't been great. The recommendation, rebrand and reprice it above our current discount, but below competitor pricing. All of that with one simple prompt.
I can even see the sources that it analyzed to make these recommendations. Gemini Enterprise is also suggesting a new landing page and new media to put front and center. I'll ask my product strategy agent to generate some videos. This part can also take a little bit of time, so we'll look at some pre-generated options. This looks amazing. Veo 3 placed the exact pieces in a whole new organic living style space. Now, let's get that website updated. I'll ask my dev agent to coordinate with our engineering team. It connects directly with Jira to give a developer full context for a seamless handoff. And on the developer's end, I'll automatically get a notification right here in my Google Chat with that Jira ticket. I'll get this going with CLI. Here, I'll ask my dev agent to start working on that Jira ticket.
It's going to give me an overview of the strategy, the assets, and how it plans to execute on this webpage using all of the context from the previous Gemini Enterprise session, the ticket, and our brand and coding guidelines. This is really easy. Let's go ahead and get this built and deployed. From here, this can take a little bit of time. I'm sure the devs in the audience know that. I gave it a little bit of a head start. Let's check out that final product. This looks perfect. Now, let's head back over to Gemini Enterprise to prepare our stores for the launch. Our store operations team would usually handle this piece, so I'm in a brand-new session. I'll ask for a deck to get our regional distributors up to speed.
Gemini Enterprise uses my company's context to find exactly what I need to know about that launch, how I've managed these in the past, and my team's sales goals. Then, it works with the Google Workspace agent to create a branded deck in my personal style. Again, this can take a minute or two, so we'll look at one I generated ahead of time. Get this. With our brand-new Canvas mode, I can make edits and collaborate in Google Slides without ever having to leave Gemini Enterprise. Overall, I think this looks great, but I actually prefer the word reimagined here. Perfect. Then I'll go ahead and share this with my team. It looks like some of them are already jumping in. Let's recap.
We saw Gemini Enterprise orchestrate multiple agents with a single prompt, share context, and generate content and collaborate in Google Workspace, all supported by our enterprise-grade governance, which gives organizations the power to manage and secure agents at scale, all while maintaining control over their business data. That's the magic of the Gemini Enterprise Agent Platform.
Thank you, Erica. Thousands of companies around the world and across industries are choosing Gemini Enterprise to transform their business. We're seeing a profound shift. Companies aren't just redesigning workflows, they're turning their everyday employees into AI builders, empowering them to solve their own hardest problems. Signal Iduna is redefining insurance in Germany with Gemini Enterprise. They hit 80% adoption within weeks, making it their front door for AI. 11,000 employees are now building specialized agents. Their health agent automatically verifies coverage against a century of complex policy data, driving a 400% surge in weekly users and providing answers 37% faster. Bosch is adopting Gemini Enterprise globally. From finance to engineering, employees are deploying custom agents so that they can reclaim time for complex research. KPMG reached 90% adoption with over 100 agents in just the first month. The American Society for Clinical Oncology is now delivering cancer expertise faster.
Merck is bringing Gemini Enterprise to 75,000 employees to support their purpose to save and improve lives. And Walmart is rolling out Gemini Enterprise to help their store leaders spend more time with customers. Let's take a look.
[Presentation]
This winter, ahead of one of the biggest moments in global sports, our world-class engineers collaborated with world-class athletes. We're incredibly proud of how Google Cloud is helping athletes push the boundaries of their sport with AI.
[Presentation]
Please welcome three-time Olympic gold medalist, entrepreneur, and snowboarding legend, Shaun White.
What's up, Google Cloud Next! I love seeing it snow in the desert. This is wild. All right. Look, back when I was training, our tools were camcorders and basically guesswork. You'd land a trick and watch it back, and you'd be looking at it thinking, "How can I make that trick better?" Over time, I've seen the tricks get bigger and the walls get higher, and now Google Cloud is bringing AI to the mountain. This winter, I worked with some crazy smart Google Cloud engineers on some awesome tech. To show you how rad this is, give it up for Jason Davenport.
What's up, Shaun?
Come on in. What do we got for everyone?
All right, Shaun, this is going to be super fun.
Okay.
Let's unleash the power of Google Cloud AI into the half pipe. And to do that, we're going to pull up a trick from your 2017 Burton U.S. Open.
Oh, wow.
I'm sure you remember this.
It's a throwback. It's a throwback.
It is. Let's analyze this clip for the audience, and so they can see all the cool things that AI sees.
All right.
All right, let's start it off. First, this trick is over in under three seconds.
Mm-hmm.
You are literally, it's a split-second blur. For everyone watching, what trick is this?
This is a switch cab double flip 1440.
That is insane.
Yeah.
For everyone else, what does that actually mean?
Basically, it's my unnatural way of riding. I do four full rotations with two flips in the middle.
I tried stopping it.
You got that.
Using the camcorder, and I couldn't do it. Let's slow this down, right?
Yep.
To do this, we need to analyze this frame by frame and see what Google Cloud sees.
Exactly.
We're going to do this essentially with every single stop in this. First, let's start with your pose.
Yep.
This is what's super cool. We built a model in collaboration with Google DeepMind that can track you spatially, and it creates a three-dimensional, essentially, pose of you from a flat two-dimensional video.
That's awesome.
I mean, I love it. I can't even see you here. How does it do it?
This was my tight pants, leather jacket phase, so you got to really punch in on it.
I know. Well, we should have one of me.
Yeah.
I got some out back, too.
It was a good look.
I know. It's still a good look.
Okay.
All right. Let's talk about the next thing, right?
Yeah.
These are stats powered by Gemini.
Mm-hmm.
Here we're tracking your flight dynamics.
Yeah.
Rotational velocity, and even your tuck compression.
Yeah. This is amazing. We didn't have this before. I can see how much time I've spent in the air.
Yeah.
Now take old footage that I've done and compare it with new footage of how much time was I in the air when I landed the trick and when I didn't land the trick, and I can compare the two, and that data's really going to help me progress, and obviously the next generation can use these tools, too.
It's true. What I love is I love seeing how quickly your rotational velocity gets up here. It's all about how you exit the pipe, right?
Torque, exactly, and coming out of it.
All right. Let's look at one more cool thing that we've done here, which is the ribbon overlay.
Oh, I love this.
So, here's what I love about this.
Mm-hmm.
We're actually showing your work here with the whole visualization. What happens here?
Yeah.
We're moving from blue to green on the thing.
Yeah.
What is that?
When I was working with the engineers, I was trying to describe this trick, and there's a turning point where normally I do one front flip and then come in for the second, but then you have to pause and then go back flip.
Yeah.
If you miss that turning point in the trick, well, it's not good. It's not good. This really pinpoints it exactly.
It is so cool to watch this. What I love is that this is making sports more accessible for fans, right? We can see all of the craziness that you're putting into the sport.
Exactly.
All right. Let's bring this back and talk a little bit more tech.
Yeah.
Google Cloud is helping everyone innovate here by creating new use-case-specific models, even for things like spatial analysis, and training those models on large amounts of data with Google Cloud TPUs.
Yep.
And helping everyone to build a secure data and agent foundation with Gemini Enterprise Agent Platform. Compute, models, and platform. This is the power of Google Cloud AI.
Look, from my personal experience, I can tell you, learning a trick on the mountain is one thing, but actually understanding the physics of a trick is a whole other thing. And this is really going to help, not only the next generation of athletes to learn new skills, but also help the fans at home understand the sport better. And not just for snowboarding, but I think sports globally.
Yeah.
Look, I'm so stoked to see what's ahead. Thank you, Jason, and thank you Google Cloud Next.
Please welcome SVP and Chief Technologist, AI and Infrastructure, Amin Vahdat.
Thank you, Shaun. We're taking the exact same technology that empowers athletes and applying it to the enterprise. Because while athletes use data to understand the game, the enterprise needs it to change the game. That kind of scale is the ultimate stress test for a unified system. When you process massive video feeds, reason over decades of historical data, and defend every byte in near real-time, you have a foundation built for anything. To solve this at a global scale, we first have to solve a physics problem. It is a constant struggle against the limits of software architecture, hardware architecture, power, cooling, and even the speed of light. This is the foundation of our AI Hypercomputer. We integrate clean energy, massive physical scale, and our purpose-built infrastructure into a single unified engine of efficiency and innovation.
Because in the agentic era, compute is no longer defined by the chip; compute is the entire data center. We give you the ultimate flexibility to pick the right architecture without compromise. You can see this in how different industry leaders are innovating on Google Cloud. Axia Energia, Brazil's largest power company, helps prevent power outages for millions of customers by running advanced weather modeling on TPU clusters. By forecasting severe weather conditions up to 10 days in advance, the company can proactively plan and execute preventative measures before storms hit, significantly reducing service interruptions year-over-year. Thinking Machines Lab chose our NVIDIA-based infrastructure to power Tinker, an open platform for reinforcement learning and fine-tuning of frontier models for specialized use cases. Woven by Toyota achieved 42% faster training for models that predict complex traffic events.
The US Department of Energy is powering AI co-scientists across all 17 national labs to accelerate the pace of scientific discovery. And Boston Dynamics is training and safely deploying flexible vision language models to scale robotics across diverse industrial applications. As we enter the agentic era, we see that the demands of training and serving have completely diverged. To meet the performance needs of these explosive workloads, I am proud to announce our eighth-generation TPUs. These are things of beauty. For the very first time, we're launching two specialized platforms, each built from the ground up for the distinct demands of training and serving. TPU 8t is a powerhouse optimized for training. We have redefined performance capability by moving block scale multiplication directly inside the MXUs. This native MXU quantization eliminates VPU overhead, delivering nearly 3x the compute performance per pod over previous generations.
This allows us to push the absolute limits of Model FLOPs Utilization at a massive scale and reduces the training time of frontier models. It leverages a breakthrough inter-chip interconnect technology, which now delivers twice the bandwidth compared to Ironwood, scaling up to 9,600 TPUs connected via our 3D Torus topology. This is a 2.8x improvement over Ironwood to deliver 121 exaflops of FP4 compute per pod. 8t provides 2 PB of shared high-bandwidth memory in a single Superpod and utilizes the new TPU direct storage to enable high-speed data transfers from managed storage. To put that scale in perspective, 2 PB is enough to hold the entire digital collections of the Library of Congress 100x over. TPU 8i is optimized for the thinking part of the equation, inference and reinforcement learning.
At the chip level, we have integrated a specialized collectives acceleration engine, which reduces latency by an additional 5x. By hosting the memory cache entirely on silicon, we've finally broken the memory wall that slows down long context decoding. We then scale this on-chip performance to our new Boardfly topology, which deploys 1,152 TPUs into a single pod to run millions of concurrent agents with near zero latency. This delivers 11.6 exaflops of FP8 compute for a substantial 9.8x increase in performance over the 256-chip Ironwood pod. We are extending our infrastructure leadership to general purpose workloads as well with Google Cloud Axion. Our Google Axion N4A compute instances, powered by a custom-designed Arm CPU, deliver up to twice better price performance and 80% better performance per watt than comparable x86 instances.
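As a quick sanity check, the TPU pod figures quoted above are internally consistent with Ironwood's previously published specs. The only inputs below not taken from this talk are Ironwood's public per-chip rate (roughly 4,614 TFLOPs FP8) and its 9,216-chip superpod size, which are treated here as assumptions:

```python
# Arithmetic check on the quoted pod-level figures.
EXA = 1e18
IRONWOOD_CHIP = 4_614e12               # FLOPs FP8 per Ironwood chip (public figure, assumed)

tpu8t_per_chip = 121 * EXA / 9_600     # ~12.6 PFLOPs FP4 per TPU 8t chip
tpu8i_per_chip = 11.6 * EXA / 1_152    # ~10.1 PFLOPs FP8 per TPU 8i chip

# "2.8x improvement over Ironwood": 121 EF vs a 9,216-chip Ironwood superpod.
gain_8t = 121 * EXA / (9_216 * IRONWOOD_CHIP)   # ~2.8

# "9.8x increase over the 256-chip Ironwood pod": 11.6 EF vs 256 chips.
gain_8i = 11.6 * EXA / (256 * IRONWOOD_CHIP)    # ~9.8
```

Both headline multipliers fall out of the per-pod exaflop figures and the assumed Ironwood baseline.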
These Axion instances provide sustained, continuous operation designed to eliminate cold starts and logic gaps, ensuring your agents are always on and ready to respond. We are the preferred cloud for NVIDIA GPUs for some of the largest-scale and most innovative customers in the world. Today, I am pleased to announce that Google Cloud will be among the first to offer the NVIDIA Vera Rubin NVL72. Vera Rubin NVL72 on Google Cloud achieves 10x performance efficiency, optimized for high interactivity and long context workloads. Of course, silicon is only half the battle. To feed these chips, we've redesigned our entire storage and networking pipeline. I am pleased to announce that Managed Lustre now supports throughput up to an industry-leading 10 TB per second. We tie it all together with our new Virgo Network. Virgo doubles connectivity to scale training beyond the Superpod.
It links 134,000 chips with up to 47 Pb per second of non-blocking bandwidth to deliver 1.7 million exaflops of compute. We can now turn months of training into weeks with the power of a million plus TPU chips in a single cluster, all orchestrated by Pathways and JAX. With up to 4x the bandwidth per accelerator over the previous generation, this combination delivers near-linear scaling, ensuring you get the full power of every chip you pay for. We're also making Virgo available on NVIDIA Vera Rubin NVL72, supporting up to 960,000 GPUs. Leading financial institutions are using this exact combination of silicon and networking to gain a massive edge. Let's hear from Citadel Securities on how we are innovating together.
[Presentation]
Citadel Securities is just one example of many innovators who are gaining a competitive edge on our foundational AI Hypercomputer. At this scale, you cannot have humans manually troubleshooting configurations. You need a cloud that drives itself. We have used the Model Context Protocol to turn every Google Cloud service into a tool that agents can orchestrate directly. By integrating Gemini Reasoning into our own telemetry, the system now performs autonomous root cause analysis, identifying and fixing misconfigurations before you even realize there's a problem. This is the AI Hypercomputer. Purpose-built infrastructure to accelerate the entire AI life cycle. To show you how we feed this engine with your data, please welcome Karthik Narain.
Thank you, Amin. Amin just showed you how we built the world's most powerful reasoning engine. But here's the reality: reasoning without context is just a guess. When you expect your AI to make decisions and your agents to take actions, you cannot afford to guess. Trusted context turns an intelligent guess into a decisive action. Today, we are completely rethinking the data platform. We call it the Agentic Data Cloud. To make this real, I'm thrilled to share four foundational innovations. It all starts with the foundation of truth. Right now, your data is trapped in PDFs, video calls, multiple data stores, and SaaS applications. How do we help AI make sense of it all? Today, we are introducing the Knowledge Catalog, a universal context engine for your enterprise.
Starting with your structured world, the Knowledge Catalog natively integrates with BigQuery, mapping tables and metadata into unified business logic. We are going even further, extending the same capability to the unstructured world as well through Smart Storage. In the past, a file landing in your storage sat passive, waiting for a pipeline. Now, the second that image or PDF hits Google Cloud Storage, it is instantly tagged, enriched, and made agent-ready. Zero manual data engineering. Powered natively by Gemini, the Knowledge Catalog goes even deeper, reading files, autonomously extracting entities, mapping relationships, and learning your unique business semantics. When an agent hears net revenue or risk, it understands the exact meaning. Take Virgin Media O2. They had over 20,000 data assets, many of which were untapped. Knowledge Catalog is helping them activate this data to empower their global product teams.
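In miniature, the "agent-ready on arrival" flow is an event handler that enriches each object the moment it lands. In this sketch, simple heuristics stand in for Gemini's extraction, and every name and term list is hypothetical:

```python
import re

# Toy vocabulary standing in for a learned model of business semantics.
ALLERGEN_TERMS = {"soy", "peanut", "milk", "wheat"}

def enrich_on_upload(filename, text):
    """Toy stand-in for model-powered enrichment triggered on upload:
    tag the object and extract simple entities so it is agent-ready."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    tags = sorted(words & ALLERGEN_TERMS)
    # Pull out ingredient-code-style entities such as "Base 204".
    entities = re.findall(r"Base \d+", text)
    return {"object": filename, "tags": tags, "entities": entities}
```

Calling `enrich_on_upload("supplier.pdf", "Base 204 contains soy lecithin")` yields a record tagged with `soy` and the entity `Base 204`, the kind of metadata an agent can query without any manual pipeline work.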
Now, combined with direct zero-copy access to applications, operating systems, and AI platforms like Palantir, Salesforce, SAP, ServiceNow, and Workday, you now have instant context across your entire business. That's right. This trusted context enables agents to act with certainty. Now, imagine this power in the hands of every employee, every day. We've integrated the Knowledge Catalog with Gemini Enterprise's Deep Research Agent. You can now enable multi-step reasoning across internal and web data for precise, cited business answers. What took weeks of manual effort now happens seamlessly. This is the Knowledge Catalog. Second, we are unveiling a Gemini-powered data science authoring experience that fundamentally transforms how data practitioners work with our new Data Agent Kit. We didn't just want to build another platform you have to learn. Instead, the Data Agent Kit integrates comprehensive libraries of AI skills and plugins directly into the workflows you already use.
Your IDE, your notebook, your terminal. Whether you are in VS Code, Cloud Code, or Gemini CLI, we instantly turn your everyday workspace into a native data environment. Here's how it works. You just state your intent, predict customer churn, and your environment simply takes over. It autonomously builds the pipelines and deploys the models right on Google's Agentic Data Cloud. It handles all the complex orchestration for you, creating a direct path from your business objective straight to the business outcome. For researchers at Bayer Crop Science, this eliminates manual data management and analytics, freeing them to focus on pioneering agricultural innovations for a sustainable and productive future. Third, the demand on data is extreme in this agentic era. To address the scale challenge, we're unleashing the Lightning Engine for Apache Spark.
Lightning Engine doesn't just deliver industry-leading performance for Spark; it delivers up to 2x the price-performance of the previous market leader. We fundamentally rebuilt this engine for the agentic era. Reasoning with trusted context enables autonomous agents, and with the Lightning Engine, that autonomy operates at massive scale. Industry-leading organizations like Flipkart, Lowe's, and Meesho are accelerating their Apache Spark workloads with Lightning Engine. Finally, let's address the biggest and longest-running debate of all. Where should your data reside? The reality is it lives everywhere: at Google, at AWS, at Azure, and across your SaaS applications. Your old lakehouse expected the analytical engines and the data storage to reside in the same cloud. This approach is broken. Today, we are introducing the Cross-Cloud Lakehouse, where your analytical engine reasons over data in any cloud. Built on the open Apache Iceberg standard, it is completely borderless.
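At its core, the borderless idea is a planner that routes each table scan to the cloud where the data physically lives instead of copying it first. A toy sketch, with hypothetical table names and a dictionary standing in for the real Iceberg catalog metadata:

```python
# Toy cross-cloud planner: a catalog records where each Iceberg table
# physically lives, and the engine emits one in-place scan per table
# plus a final join, never an egress copy. All names are hypothetical.
CATALOG = {
    "allergen_schema": {"cloud": "gcp", "path": "gs://lake/allergens"},
    "loyalty_list":    {"cloud": "aws", "path": "s3://lake/loyalty"},
}

def plan_query(tables, join_key="customer_id"):
    """Return one scan step per table, executed on its home cloud,
    followed by a join step over the shared key."""
    scans = [{"op": "scan", "table": t,
              "cloud": CATALOG[t]["cloud"], "path": CATALOG[t]["path"]}
             for t in tables]
    return scans + [{"op": "join", "on": join_key}]
```

Planning a query over `allergen_schema` and `loyalty_list` yields a GCP scan, an AWS scan, and a join: the data never moves, only the plan does.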
Instead of forcing you to accept complex networking hurdles or massive egress fees, we deliver low latency direct connectivity to AWS and Azure as if the data sat natively in Google Cloud. No more moving data, no more vendor lock-in, just freedom. No matter where your data lives, from traditional systems to SaaS applications, we provide the trusted context and connectivity your business needs to act upon. This is why customers are choosing our Agentic Data Cloud to drive their agentic transformation. One example of this transformation is Vodafone. Vodafone unified their data on BigQuery to build a more resilient global network for their customers. And with Gemini Enterprise, they are launching hundreds of agents to proactively resolve outages and optimize infrastructure, scaling network reliability and saving millions every year. Macquarie Bank's goal was to securely scale AI across their entire operation to deliver more personalized customer experiences.
They deployed Gemini Enterprise to all staff and supercharged productivity. For their two million customers, Macquarie's 24/7 AI assistant, Q, autonomously answers banking questions. All of this was made possible by unifying their data on BigQuery and Spanner. With this foundation, Macquarie has already cut client losses from scams by half. That's secure, frictionless banking at scale. American Express is redefining trust by centralizing its core data on Google Cloud to enable faster fraud and risk analysis at global scale. Costco leverages BigQuery to accelerate member insights, empowering associates to optimize experiences and maximize value for millions of members. This is the Agentic Data Cloud. You and your AI agents now have a platform that operates with universal context, acts on your defined intent, and scales autonomously across your borderless data and the applications you already use today.
To show you how to bring trusted context to your business, please welcome Yasmeen Ahmad.
Karthik gave us the recipe, so now let's see how it all plates up. Most companies take five weeks to turn a trend into a decision. We're going to do it in five minutes. To get there, we're breaking three barriers: dark data with our Knowledge Catalog, data silos with our Cross-Cloud Lakehouse, and manual code with Google's new agentic tools and skills. So let's get cooking. Our new agentic workflow triggers have detected a tasty trend. Midnight Swirl Froyo. Looks delicious. With any new flavor, I need to know, is it safe? Are there any hidden allergens? Is there a market? Where are my hungry customers? And is it worth it? What's the real ROI? But first, safety. We have thousands of PDF recipes. I'm going to search for soy. It's a top food allergen, and safety is non-negotiable. Zero results. Looks good so far.
But this recipe does contain an ingredient, Base 204. Let's check the supplier manual for Base 204. Here it is. Base 204 actually contains soy. A simple Gemini search would miss this connection because the information is trapped across two separate PDFs. How do we digest all of this data to find those hidden connections? Well, I need an agent that has the skill to work with my PDF data combined with our Knowledge Catalog. Watch. Our agent helps us find hidden allergens in a recipe for Midnight Swirl. Here it is. Midnight Swirl contains soy, and it's giving me a data citation, a schema called product specs. Now, this schema was generated by our Knowledge Catalog working with Gemini to reason over our supplier and recipe PDFs, extract entities like recipe, ingredient, and allergen, and map those previously invisible connections between them.
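The hidden-connection lookup reduces to a transitive join over entities extracted from two separate documents: the recipe PDF links recipes to ingredients, and the supplier manual links ingredients to allergens. A toy sketch mirroring the demo data:

```python
# Entities as a catalog might extract them from two unrelated PDFs.
recipe_ingredients = {"Midnight Swirl": ["cream", "Base 204"]}
ingredient_allergens = {"Base 204": ["soy"]}

def hidden_allergens(recipe):
    """Join the two extractions transitively. A keyword search of either
    document alone would miss that Midnight Swirl contains soy, because
    'soy' never appears in the recipe PDF itself."""
    found = set()
    for ingredient in recipe_ingredients.get(recipe, []):
        found.update(ingredient_allergens.get(ingredient, []))
    return sorted(found)
```

Here `hidden_allergens("Midnight Swirl")` surfaces `soy` even though the recipe document never mentions it, which is the whole point of reasoning over mapped entities rather than raw text.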
This is trusted context built over dark data. Next, market size. To launch this globally, I need a precision list of customers with zero soy allergies. However, I have two datasets that have never met. Our allergen schema is here in Google, while my loyalty list, with millions of customers and dietary preferences, sits in an Apache Iceberg table on AWS S3. Now, usually you'd call your data engineer and wait three days for a migration. Not today. No more slow churn. With our Cross-Cloud Lakehouse, the data stays exactly where it is while we dynamically build the list. Before writing a single line of code, Gemini builds a plan using our data engineering tools while respecting my built-in security and permissions. As the data scientist, you're the chef; you can review. Let's accept and see what our agents serve up.
They execute on our Lightning Engine, which is 2x more price-performant than the proprietary market alternative, filtering out our soy-sensitive customers in mere seconds. AWS and BigQuery sources connected. Zero complexity. It's as smooth as a batch of Midnight Swirl. Finally, ROI. Is this worth a global launch? An annual forecast would help. Again, our agents don't guess. Gemini builds a multi-step execution plan. Can you make changes? Yes, chef. Let's take a look and change the number of simulations from 1,000 to 2,000, and we will accept. Acting as the orchestrator, Gemini delegates to our specialized data science tools, choosing the right models, and again, executing on our Lightning Engine for Spark. It's built an entire notebook for me. Now, training usually takes a bit of time, so here's one I ran earlier. $15 million. Even after protecting our soy-sensitive market, the demand is massive.
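The forecast step is a Monte Carlo simulation, the kind whose run count the demo bumps from 1,000 to 2,000. A self-contained toy sketch; the demand and price distributions below are made up for illustration, not taken from the demo:

```python
import random
import statistics

def forecast_annual_revenue(n_simulations=2000, seed=42):
    """Toy Monte Carlo ROI forecast: each run draws 52 weeks of noisy
    demand and price, sums a year of revenue, and the forecast is the
    mean across runs. More runs tighten the estimate."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_simulations):
        weekly = [
            max(0.0, rng.gauss(60_000, 15_000))  # units sold that week
            * rng.uniform(4.50, 5.50)            # price per unit
            for _ in range(52)
        ]
        totals.append(sum(weekly))
    return statistics.mean(totals)
```

With these assumptions the expected annual revenue lands in the mid-teens of millions, which is the shape of the "$15 million" answer the notebook produced.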
Can we bring back the timer? Look what we accomplished in less than five minutes. We turned a viral trend into a $15 million decision. First, Knowledge Catalog found the hidden soy connection. Second, Cross-Cloud Lakehouse, powered by the Lightning Engine, connected to AWS with zero complexity. And third, our agentic skills and tools built the forecast with zero manual code. This is the Agentic Data Cloud. We did it. And now, here to share how we're securing your agentic enterprise, please welcome Francis deSouza.
In an agentic enterprise, giving agents the power to act means that security must become an autonomous force, moving faster than the threats themselves. Human analysts simply can't keep up with AI-driven attacks. The mean time to exploit a vulnerability has dropped to -7 days, meaning that today, exploits routinely occur before a patch is released, and the handoff time between initial access and handover to a secondary threat group has dropped from eight hours to 22 seconds. Your security must operate at machine speed. Now, to do that, we're moving the heavy lifting to a Gemini native agentic security operations center or SOC, and the results are already here. Our triage agents are turning 30-minute investigations into 60-second resolutions. Our new threat hunting and detection agents proactively sweep your environments for risks at a scale and speed that no human team could match.
By leveraging Google's unparalleled telemetry, including Mandiant, VirusTotal, and Chrome, we're delivering a defense system based on global intelligence. And now, with our integrated dark web intelligence, we can identify external threats with 98% accuracy. Customers are already seeing big benefits from Google Cloud security offerings. CME Group depends on a unified stack of Mandiant, Security Command Center, and Google SecOps to protect the world's largest derivatives system against novel threats. In an ultra-high-stakes environment, the depth of Google's security offering is matched by its speed. CME Group achieves nanosecond precision with Google's ultra-low latency solution, supporting billions of transactions daily and lowering the barrier of entry for new traders. Now, the biggest threats today aren't just hackers. It's also shadow AI: unauthorized models and agents operating in your enterprise but outside your control.
To meet this challenge, we have integrated the industry's deepest security context directly into our AI fabric. That is why I am thrilled to officially welcome the Wiz team to Google Cloud. Together, we are building a new security posture for the agentic enterprise. While the new agent platform in Gemini Enterprise secures and governs your agents, with Wiz, we're now extending that protection to every asset, on your premises and across all major clouds. To explain how we're solving this, I'm excited to introduce to the stage Wiz co-founder, Yinon Costica.
Thank you, Francis. At Wiz, our mission from day one has been to help customers protect everything they build and run. Wiz began by unifying code, cloud, and runtime context to move at developer speed. But AI has fundamentally changed the environment. With autonomous products, hyperscale code generation, and AI-weaponized threats, the stakes have never been higher. To protect this new frontier, security must now move at machine speed, and that is exactly why we built Wiz as the first AI application protection platform, or in short, AI-APP. It solves four key security challenges for the AI era: giving security teams unparalleled visibility into the AI stack, finding and proactively remediating critical risks before attackers can use them, enabling our builders to start secure by design in their AI-enabled IDEs, and lastly, arming SecOps teams to stop threats to cloud and AI environments.
In order to outpace attackers, Wiz delivers a set of expert AI agents. It's a really cool concept. Our Red, Blue, and Green agents are named after the security teams they help. They form an AI layer to autonomously identify, investigate, and then fix critical risks at machine speed, leveraging context from our unique security graph. Security always starts with visibility. But today, your AI footprint is an orbit of interconnected tools. It spans models like Gemini, Claude, and OpenAI. Your teams are using agent studios such as Gemini Enterprise Agent Platform, Lovable, Copilot Studio, and Salesforce's Agentforce. To secure all of these environments, we examine every layer: the clouds, the data, the models, the agents. And we continuously correlate these risks to find and fix the attack paths that matter most. Now, let's see it in action. Okay, so this is Wiz.
Wiz starts by automatically building a dynamic inventory from your code and cloud, completely agentlessly. This means you get full visibility into the environment, into everything your teams are building with AI, without any friction. Wiz then builds the security graph. Think of the security graph as a living map that explains the architecture and the logic of any AI application it sees. Here we see an example of an agent that is actually running on Claude. It has tools to query a database and even execute code. You can see that Wiz flags that this agent is Internet-exposed and that it has access to sensitive customer data. Now, this is security moving at the speed of AI. You don't need any reviews with your development teams. It just works.
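The risk flagged here, an internet-exposed agent with a route to sensitive customer data, is what a graph search over the security graph surfaces. A toy sketch over a hand-built graph; the real security graph correlates far richer context (code, cloud config, runtime):

```python
# Toy security graph: nodes are assets, edges mean "can reach / has
# access to". An attack path worth flagging is any route from an
# internet-exposed node to sensitive data. All names are hypothetical.
EDGES = {
    "internet": ["support_agent"],
    "support_agent": ["query_tool", "code_exec_tool"],
    "query_tool": ["customer_db"],
}
SENSITIVE = {"customer_db"}

def attack_paths(start="internet"):
    """Depth-first search for simple paths from the exposure point to
    any sensitive-data node."""
    paths, stack = [], [[start]]
    while stack:
        path = stack.pop()
        node = path[-1]
        if node in SENSITIVE:
            paths.append(path)
            continue
        for nxt in EDGES.get(node, []):
            if nxt not in path:  # avoid revisiting nodes (no cycles)
                stack.append(path + [nxt])
    return paths
```

On this graph the search returns exactly one path, internet to support_agent to query_tool to customer_db, which is the chain a triage agent would prioritize.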
In the AI era, the best defense is to continuously use AI against ourselves, giving the defenders a first-mover advantage over the attackers. This is exactly what the Wiz Red Agent does. It validates every single exposure that Wiz identified. You can think of it as a friendly hacker that continuously probes you from the outside, like an elite red team. Look at this finding: the Red Agent found an authentication bypass vulnerability. This means an attacker could gain access to that agent, and also to the sensitive database behind it, to exfiltrate sensitive information. It's not a potential risk anymore. This is a validated risk, and that gives our developers the prioritization and the confidence to fix it immediately.
Now, in the past, triage used to mean very long meetings, waiting, and spreadsheets. You don't need that anymore, because we have the Wiz Green Agent. The Wiz Green Agent automates the entire triage process: identifying the owner, suggesting the fix, and pinpointing the exact line of code that introduced the risk in the first place. All automated, no phone tag, no friction. And from here you have a choice. You can send it to the developers as a PR, or, maybe better, send it to a coding agent like CodeMender to execute the final fix at the source automatically. This is how we move from a validated risk to a verified fix. End-to-end visibility, control, and full confidence.
To summarize, whenever you have an AI team that ships a new product or even someone in finance that deploys a new agent, now within minutes, security is able to, one, identify the agent and its architecture. Two, automatically conduct a security review to find and validate the risks. Three, automatically suggest a fix, send it to the dev teams in their own native tools. Ta-da. This is how AI agents unlock secure innovation with AI at scale. Thank you all. Back to you, Francis.
The unique combination of Google's Agentic SOC and Wiz's deep cloud context gives customers the confidence to build and deploy generative AI everywhere safely. We're helping the Los Angeles Department of Water and Power secure critical infrastructure ahead of the L.A. 2028 games, while Singapore's CSIRT enables proactive defense against advanced digital threats. DBS strengthens security by embedding Google Cloud's protection directly into their architecture, enabling real-time threat detection and response, reinforcing customer trust. Morgan Stanley chose Wiz as a key component of its cloud security strategy and is expanding visibility and control across its cloud environments. Now, as part of this strategy, the firm is deploying Google Cloud security capabilities to support its evolving cloud platforms. Trusted global icons from Nestlé to LVMH, from BMW to Shell, rely on Wiz and Google Cloud security to ensure their AI-driven future is secure by design.
We have closed the loop between the defender and the builder. Google and Wiz provide a unified Agentic Defense that is secure by design and autonomous by nature. Together, we are redefining cybersecurity for the AI era so security teams can protect their organizations at machine speed. To share more about how our agents are helping your customers, please welcome Carrie Tharp.
Thank you, Francis. A secure foundation gives you the confidence to innovate. At Google, we are using that foundation to solve two of your biggest challenges, serving your customers and helping your employees work smarter. We do this by deploying a coordinated Agentic Taskforce, specialized agents that don't just assist your team, they operate alongside them. In the agentic era, an agent is no longer just a tool. It's a strategic extension of your business, built to expand your reach, deepen engagement, and personalize service at scale. Earlier this year, we launched Gemini Enterprise for Customer Experience, a suite designed to enhance the entire customer journey from the first moment of discovery to ongoing service interactions. Central to this is agentic commerce. We have introduced pre-built shopping and food ordering agents that handle everything from discovery to checkout entirely through natural language.
By pairing these agents with our customer engagement agents, we've created a seamless journey from search to service, turning every interaction into revenue and retention opportunities. For example, with the food ordering agent, Papa Johns is building a hyper-personalized system that remembers customer preferences to get the food to your door faster. During live interactions, Agent Assist coaches employees to deliver fast and more accurate answers to customer questions. For example, Best Buy is guiding shoppers through those complex tech specs, issue resolution, and appointment scheduling, all autonomously. Our new omni-channel gateway helps ensure your agents maintain context across every surface, web, mobile, voice, while preserving universal consumer context. If a customer moves from a text chat to a phone call, the agent seamlessly remembers exactly where they left off, and this scales globally, conversing with human-like voice capabilities in multiple languages.
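In miniature, the context-preserving gateway amounts to keying conversation state by customer rather than by channel, so a phone call picks up exactly where a text chat left off. A toy sketch; the class and field names are hypothetical:

```python
# Toy omni-channel gateway: every message, whatever its channel, is
# appended to one session per customer, and the agent always sees the
# full cross-channel history. All names here are illustrative.
class ContextGateway:
    def __init__(self):
        self.sessions = {}

    def message(self, customer_id, channel, text):
        """Record a turn and return the unified history the agent sees."""
        session = self.sessions.setdefault(customer_id, [])
        session.append({"channel": channel, "text": text})
        return session
```

A chat turn followed by a voice turn for the same customer yields one two-entry history, which is what lets the agent "remember exactly where they left off."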
Gemini Enterprise for Customer Experience is the technology powering The Home Depot's Magic Apron assistant to reach customers across every channel. Let's hear from The Home Depot.
[Presentation]
This same agentic approach is also reshaping digital-first industries. One example of this is Reliance. Reliance is transforming retail for millions in India with a shopping agent. Customers can simply type, "Plan a birthday party," and the agent guides their journey across categories and assembles their cart, which drives more revenue per visit. Gemini-powered catalog enrichment supports this, analyzing millions of images in minutes instead of months. Gemini Enterprise for Customer Experience is hard at work across many industries and enterprises. Internally, teams at Google, including Google Store, Google Photos, Google One, and YouTube, are partnering with us to power our own scaled customer experiences. Let's take a look at how one of our first internal customers is doing this. Over to you, Patrick.
All right. Thanks, Carrie. Hey, y'all. I'm Patrick Marlow, a senior product manager here at Google Cloud. YouTube TV Customer Support recently launched a voice agent to help users with NFL Sunday Ticket and YouTube TV plan subscriptions. It's currently live in production, serving 100% of their users. We're going to take a look at it in action today. I'm going to call the support number now.
Thanks for calling the YouTube TV and NFL Sunday Ticket support line. So tell me, how can I help you today?
Hey. Yeah, I've got a bunch of my friends coming over tomorrow night for the NFL draft. I don't have cable, and to be honest, I really only watch sports. Does YouTube TV have a plan that's just for that?
Yes, we do. The YouTube TV sports plan has exactly what you're looking for, plus over 30 other sports channels. It's $18 less per month than the base plan. Would you like me to text a link to the number you're calling from that will take you directly there?
Yeah, go ahead and send me a link. I'll take a look at it.
Great. I just sent the link to your mobile phone. Is there anything else I can help with today?
Yeah, I'm going to take a look at this now. Actually, my father-in-law is hanging out with me here today. He's been thinking about getting the sports plan for his account as well. I just wanted to see what he thinks about this. [Non-English content] Actually, could you just quickly summarize this plan for him in Spanish? I think he just wants to know that he can get both football [Non-English content]
[Non-English content]
[Non-English content] Look, one other question. I was planning on setting up multiple TVs throughout the house, like one in the living room, one over on the patio. Would that be okay with this plan?
Definitely. With the sports plan, you can stream on up to three screens at the same time. Is there anything else?
Awesome. No. Look, I'm actually in the middle of a stadium full of people right now, and I think there's millions watching me on the live stream, so I kind of got to run. All right, you've been helpful. Later.
All right. Now, that was pretty easy. The agent was able to handle complex product logic and seamlessly pivoted languages. What happens if the team wants to make a change? They don't need a bunch of software engineers updating endless lines of code. They just need CX Agent Studio. Here, we have our visual builder inside of CX Agent Studio, which is designed to give you complete transparency and control over your entire agent-building experience. This is the behind-the-scenes look at the actual YouTube TV agent that they put in production. The orchestration is managed across multiple specialized sub-agents, which can handle even the most complex requests.
And, with our built-in test interface, you can ensure that every answer is grounded, factual, and pulled directly from a knowledge base, like the agent's price finder tool here. And the best part? YouTube TV customer support built and deployed this entire experience in just six weeks. Now, let's say that the team wants to run a promotion. It's really simple for me to add a new sub-agent here. Click Add a new sub-agent, and I will make this our new promotions agent. Say, promotions. I'll hit Save, and then I'm going to quickly add a set of business logic via instructions in natural language to our agent. I'll hit Create, and just like that, our multi-agent system has instantly adapted to this new specialized agent. It's really that simple.
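Conceptually, the orchestration shown here is a router over sub-agents: the orchestrator sends each request to whichever sub-agent's natural-language instructions best match it, and adding a "promotions" agent just registers one more candidate. In this toy sketch, keyword overlap stands in for Gemini's intent matching, and every name is hypothetical:

```python
# Toy multi-agent orchestrator: sub-agents are defined by plain-language
# instructions, and routing picks the best-matching one. Keyword overlap
# is a stand-in for real model-based intent routing.
class SubAgent:
    def __init__(self, name, instructions):
        self.name = name
        self.keywords = set(instructions.lower().split())

    def score(self, request):
        """How many instruction words the request shares with this agent."""
        return len(self.keywords & set(request.lower().split()))

class Orchestrator:
    def __init__(self):
        self.agents = []

    def add_sub_agent(self, name, instructions):
        """Register a new specialist; the system adapts immediately."""
        self.agents.append(SubAgent(name, instructions))

    def route(self, request):
        """Delegate to the highest-scoring sub-agent."""
        return max(self.agents, key=lambda a: a.score(request)).name
```

Registering a billing agent, then a promotions agent, and routing "are there promotions or discount offers" lands on the promotions specialist with no code changes to the other agents, which is the adaptation the demo shows.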
Finally, with conversational insights, the YouTube TV team can instantly see how these calls are performing all within the same platform. Here's one of my test calls from earlier this morning. Now, let's recap. We just turned a routine support call into a multilingual upgrade and a globally deployed promotion all in under four minutes. This is the power of Gemini Enterprise for Customer Experience. It gives you enterprise-grade reasoning, deep tool integration, a powerful visual builder, and insights to iterate at the speed of your business. We can't wait to see what you will build next. Thank you.
Please welcome VP of Product, Google Workspace, Yulie Kwon Kim.
Google Workspace is the world's most popular productivity tool. Even with the best tools at our fingertips, work can still feel incredibly fragmented. We've all been there. You're trying to answer one simple question, and 10 minutes later, you have 15 tabs open. You're jumping from a stale email to a slide deck that's being edited as you watch, and digging through a spreadsheet just to find a quick answer. We all spend half our day finding information and the other half figuring out what to do with it. What if you could skip both? Today, we're introducing Workspace Intelligence. With the advanced reasoning capabilities of Gemini and state-of-the-art embedding models, we're eliminating context fragmentation across the Workspace suite. Think of it as a unified intelligence layer that lives inside every Workspace app. It connects the dots and lets AI do the heavy lifting. Let's see it in action.
So, remember that furniture rebrand you saw earlier? Let's say I'm the regional distributor, and here I am in Google Chat. This is where I collaborate with my colleagues and now my agents all in one place. I'm here in the regional manager's chat space. And look at this, the regional operations agent, which we built in Gemini Enterprise, just alerted the team that our new display kits are arriving early. And as you can see, I have multiple chats blowing up. What do I need to do next? With Ask Gemini, Google Chat becomes my command center. Look here. Gemini tells me exactly what matters right now. See? It surfaced an urgent task, it linked the pitch deck that I need to localize, and it flagged my 4:00 P.M. deadline for the regional plan. The gathering is done, and I haven't opened a single extra tab.
Now, to build this plan, I remember we had a great chart that showed regional sales last quarter somewhere. Normally, this is where the hunt begins. I jump into a folder and stare at dozens of files. Instead, I'll ask Gemini, "Find the merchandising playbook from last quarter, the one with the chart showing regional sales." This isn't just keyword matching. Workspace Intelligence understands the context of my meetings and the content inside my files. Here it is. It pointed me directly to the doc with the exact graph I need. A needle in a haystack solved in seconds. Now, I need to turn this scattered data into a regional plan. I have a standard format for briefing store managers, and with Ask Gemini, I basically say, "Just do it for me." Watch this. Let's see. "Use the regional campaign skill to..." Oops.
"Create a deck with a plan for organic living." By tagging that specific skill, Workspace Intelligence goes to work. It cross-references my emails, my chats, my other docs. It pulls live HubSpot win-loss data. It uses our corporate branding, all to generate a new Google Slides deck. Yeah, it's hard at work pulling that deck together, and here it is. In the citations up here, you can see all the sources Workspace Intelligence pulled in without me having to ask. The emails, the chats, the other files. Should we take a look? Yeah. Look at this. It's beautiful. It matches my brand, and it's consistent with how I've structured all of my regional plan decks in the past. It looks like it was designed by a creative pro, not a terminal. I'm ready to brief my team. Workspace Intelligence is the end of the context tax.
It transforms how you work by turning fragmented information into a clear path forward. No complicated setup. It's secure. It's integrated. It just works. By moving from simple automation to full-scale AI orchestration, industry leaders are using Workspace to fundamentally redefine global productivity. For example, customers like Colgate-Palmolive rolled out Google Workspace to 34,000 employees. AI agents built on Gemini help their teams drive innovation, turning data into new product concepts in minutes, not months. They drive business growth by unlocking actionable insights from decades of sales history, empowering their teams to help brighten millions of smiles every morning. At Natura, custom agents are accelerating data-driven reporting by 10x, while at Korean Air, over 22,000 global employees use AI agents and tools to accelerate high-impact operational and care tasks. And at Compass Real Estate, Gemini is managing tasks for employees, so they can focus on valuable client relationships.
We know many companies want to make the shift to Workspace but worry about the complexity of migration. I'm thrilled to announce that migrating your entire organization, including compliance and finance teams, from Microsoft 365 to Google Workspace is now up to 5x faster, thanks to our new migration and interoperability enhancements. Today, you've seen what it looks like to fundamentally change how your employees operate. Go ahead, close those 15 tabs. Workspace Intelligence is now ready to do the heavy lifting for you. Thomas, over to you.
Wasn't that amazing? Thank you, Yulie. Before we close, let's take a look at one of the most respected brands in the world. Serving 3.7 billion people around the world, Unilever chose Google Cloud to build and deploy agents at scale. Let's hear their story.
[Presentation]
There's one final difference in how we work with you. We believe the future of AI must be open. While others want to lock you into a walled garden that owns your models, your data, and your agents, we offer you an integrated stack, but the freedom to choose the world's best chips and models, the freedom to run AI wherever your data may live, the freedom to control your own destiny with deep governance features. We scale this mission through our partners with whom we're building a broad and deep network of forward-deployed engineers, including Accenture, BCG, Deloitte, and McKinsey, who've announced major expansion of their Google Gemini AI practices. Along with AI-led service partners like Quantium, Distyl, and Tribe.ai, we're helping independent software vendors and SaaS companies transform their solutions with Gemini Enterprise Agent Platform.
And we're bringing AI to small and medium-sized businesses by helping them adopt Gemini Enterprise and our AI advances in Workspace, which now work so seamlessly together. We've covered a lot of ground today. We've introduced our agentic blueprint as a foundation for agents to transform your companies. First, the AI Hypercomputer, the purpose-built foundation optimized for the physics of the agentic era. Second, the Agentic Data Cloud, the engine that provides agents with trusted business context. Third, Agentic Defense, the autonomous protection that secures your entire AI life cycle. Fourth, the Gemini Enterprise Agent Platform, a new agentic platform with state-of-the-art models to build, deploy, and manage agents. Finally, the Agentic Taskforce, the specialized agents that are already transforming customer experience, employee productivity, and workflows, all served up to users through the Gemini Enterprise application.
To each of our customers and partners, thank you, each of you, for being on this amazing journey with us. Our teams have worked so hard this last year to prepare this amazing technology for you, and we're all so proud of the work they've done. This platform is ready, so what will each of you build? We have three fantastic days planned for you and a great set of sessions and spotlights still to come. Thank you and have an amazing Google Cloud Next.