Welcome, welcome, everyone. It's great to be here today, and I always look forward to this event. It's a chance to engage with family. A special shout-out to those VMUG members here today. We love you guys. Last year when I was here, we talked about private cloud being the future of the enterprise. Twelve months later, the future is here, and we have the data to back that up. A global survey of IT professionals earlier this year made it clear what your top priority is. Seven out of ten of you plan to come back on-prem. You want to invest in private cloud. Here's the challenge, and we always have this: VMware, as you know, innovated for years, but they never truly integrated the building blocks of a cloud. This is just a sampling of what we've heard from you. Now, here's the good news.
Since the acquisition two years ago, we rolled up our sleeves, did the tough engineering work, and the result today is VMware Cloud Foundation 9.0, a real software-defined platform to run all your application workloads with complete compute, networking, and storage tightly integrated. This is what you asked for. We deliver VCF as a single SKU. We made it plug and play. With VCF 9.0, I just want you to know private cloud now outperforms public cloud. It has better security, better cost management, and of course, greater control. The technology works. It works like a dream, of course, with Broadcom and VMware. Deploying into your organization may not be that easy. I run an organization myself. I get it. I feel your pain. Let's talk about the three points of friction we see here. All right? Developers. Developers don't want to think about infrastructure.
Modern apps run on containers, and developers just want to write code using their favorite DevOps tools. Meanwhile, your job is to put guardrails in place. With VCF 9.0, we run containers as seamlessly as we run virtual machines. You no longer need two separate platforms. We're giving you infrastructure at the speed of the developer. But it's not just developers you need to work with. Look at your own team. You have networking, you have storage, you have compute, you have security. Each of these teams speaks its own language. Once again, this unified platform changes all that. With VCF 9.0, you break down the silos in your organization. This is a platform that embraces IT and developers, and unites compute, networking, and storage. Here's the payoff, the big one: an accelerated path to production for your apps today.
This is more than just nice to have. It's critical for all of us. Moving on to the next issue. Your job is to secure the business against multiple threats, both physical and virtual. At the same time, you face constant pressure to move quickly. How often do you hear this? Yeah, we care about security. Just don't let it slow things down. VCF allows you to balance these two priorities. We have a broad set of security solutions integrated into VCF 9.0. You no longer need a bunch of agents and additional security tools. It's all built in. That brings me to the final, and probably most important, point of friction. Most of you continue to be weighed down by your legacy infrastructure, and you're afraid to move forward. How do you let go of your IT past so you can build for the future?
I can tell you for sure the answer is not to run straight to public cloud as you did five, ten years ago. If you're going to do cloud, do it right. Embrace VCF 9.0 and stay on-prem. VCF 9.0 is the culmination of 25 years of VMware technology and innovation. This is the platform for the future. We want to give you the best cloud platform in the world. I mean that. We want to make you a hero, the person who drives huge impact in your organization. It's one thing to hear me talk about private cloud and VCF 9.0. It's even better to hear from a customer. Now, let's take a look.
I'm Stephen Flaherty. I head up the Chief Technology Office and serve as Global Head of Infrastructure here at Barclays. I have the privilege of delivering the infrastructure components that underpin all of our key systems across the firm. Private cloud at Barclays is at the core of all of our business systems. It's literally the lifeblood of all technology across the firm. VMware Cloud Foundation underpins our Barclays virtual platform. It's as fundamental as that. With the recent push on generative AI, we're finding even more use cases to apply it to. That means from an infrastructure perspective, we need to think about our private cloud capabilities to host AI workloads, as well as public cloud, going forward. Our unified cloud platform underpinned by VCF will allow us to run those AI workloads and those AI models on-prem in the future.
Right now, we're very bullish about what we'll be able to achieve next year as we roll out VCF 9.0 to the organization. Developer velocity is at the core of Barclays' technology strategy. Being able to provide environments on demand, to stand up and tear down our environments, to expand those environments, and more than anything else, to bake that into the pipelines and the way our engineers work is absolutely vital. Our people have evolved to become poly-skilled, where the network engineers now understand storage and compute. The importance of our people evolving in lockstep with these platform constructs is vital. VCF 9.0 will be the foundation that joins everything together and creates our unified cloud experience. This will probably be the first time we expect our engineers to have a public cloud-like experience on-prem.
We'll have a level of elasticity and expansion in this platform that will rival anything we're doing in public clouds. More than that, it will be about the experience. We'll have the embedded security, the compliance, and the control aspect, and the simplicity and standardization will give us more speed and stability than we've ever had before.
Please welcome Paul Turner.
Wow. What a great audience. You know, it's kind of fun actually being back here in Vegas. More importantly, what a story from Barclays. For them, VCF isn't just infrastructure. It's powering their business. It's delivering a secure private cloud for all their applications, including AI. They have transformed IT. VCF isn't following that transformation. It is leading it. For years, one thing has been true: vSphere is everywhere. You know, it changed how business gets done. It allowed us to standardize and automate IT operations. Virtualization was just the start. The next chapter is private cloud. Delivering secure, trusted applications today requires governance of networking, storage, backup, DR, and security policies. These define the application perimeter and the reliability, security, and availability of that application. VMware Cloud Foundation, VCF, delivers on that full promise of private cloud. Automated, secure, and ready for all applications. VMs? Yes. Containers? Yes. AI? Yes.
This will power the next generation of the data center and deliver an agile, secure, private cloud environment that can be delivered anywhere: on-premises, at the edge, in the hyperscalers, every hyperscaler that you know deploys us, and of course with all our service providers and CSPs. It's IT moving at the speed of developers, providing the controls, security, and trust the business requires. From global banks to healthcare, governments, and defense industries, the most critical services that you know run on VCF. Nine out of the top 10 Fortune companies have committed to VCF. It's trusted by 95% of top manufacturers, 90% of the public sector, and 85% of financial services. Even among the biggest technology companies that you know, 90% of them are committed to VCF, and healthcare organizations as well. Today, I'm really, really pleased to announce that Walmart, the number one Fortune company in the world, is also committed to VCF.
They have selected Broadcom as their strategic vendor for virtualization. What we are doing there is helping to unify all of Walmart's globally distributed operations with VCF. You're not alone. Everyone's moving this way. A year ago today, I was on this stage, and we announced VCF 9.0. We delivered on that promise. Over 1 million hours of engineering, 5,000 engineers, and 8,000 patents underpin that technology. VCF 9 is delivered and available now; it went GA earlier this year. Don't just take my word for it. Our vExperts are kind of like the leading technologists. They're like the gurus of virtualization. Why don't we hear what they have to say?
We all know the drill with patching. Schedule your maintenance windows, migrate your workloads, reboots, and pray that nothing goes sideways. It's really time-consuming when you need to repeat this a couple hundred times across your environment.
This is why I'm super excited for live patching in VCF 9.0. A true game changer. Patch your ESXi hosts with zero reboots, zero migrations, and zero downtime. Your weekends are now back. When a new patch drops, head over to VCF Operations, your single place to download, schedule, and deploy. Fast, simple, and secure.
We all know threats are getting worse every day. Your perimeter? It's not enough anymore. Ransomware will find your data and lock it, steal it, or leak it online. That's why I'm excited about VCF 9.0's SecOps Dashboard. I can stay one step ahead. Data at rest encryption is built in, so your data is protected before hackers even get close. Plus, it watches who's accessing what and flags anything suspicious. Brilliant.
Before, we were constantly console hopping. One for capacity, another for certificates. Health checks elsewhere. It was scattered, manual, and easy to miss critical stuff. Now VCF Health and Diagnostics is built right into VCF Operations. One view for your entire private cloud stack. Seamlessly monitor health, track capacity, and catch expiring certificates early, all in one place. Finally.
One of the biggest storage challenges we used to have was efficiency, especially with duplicate data spread across multiple disk groups and hosts. Now, with VCF 9.0, we finally have global deduplication, which scans the entire cluster for duplicate data without interfering with the virtual machines, because it runs post-process, so there's no performance impact. It has made storage planning simpler, it reduces cost, and it helps administrators get more out of the hardware they already own. On top of that, with memory tiering in vSphere 9.0, we're also cutting compute costs by offloading cold memory pages to high-speed storage.
Creating a virtual private cloud used to be complicated, requiring NSX skills, network expertise, and multiple teams. Even in vSphere, it was manual and time-consuming. With VCF 9.0, that's changed. VPCs can now be created in under 30 seconds right from vCenter. No handoffs, no network knowledge, just simple, intuitive workflows with built-in multi-tenancy. Fast, easy, efficient. Even my mom can do it.
What do you think about that? Come on. You know, it's actually great to hear from the vExperts, but they missed my favorite feature. I wouldn't say they got it wrong. They had some cool features, but my favorite is native Kubernetes built into the VCF stack. You know, that looks a heck of a lot better, right? Anyway, aside from that, you hear from us, you hear from the vExperts. It's really important that we hear from customers like you. Please welcome on stage Jeremy Wright, Director of IT Infrastructure at Grinnell Mutual. Welcome, Jeremy!
Good morning. My name is Jeremy Wright, and I am the Director of IT Infrastructure at Grinnell Mutual. We are a 116-year-old insurance company tucked away in small-town Iowa. Just like Grinnell, I consider my team to be small but mighty. We're just 17 dedicated individuals supporting mission-critical workloads for our customers, agents, and mutuals across 19 states. Let me take you back to last year's Explore. I was out here, just like many of you, and I was wondering, what is Broadcom going to do with VMware? Is VCF 9.0 going to be a fit for a small outfit like mine? I had followed the acquisition very closely. I had heard what people were saying about pricing. I came here looking for answers.
As I listened to sessions on VCF 9.0's tighter integration, I really started to think, you know, this looks like a good fit for really big companies, but is it powerful enough and fit for a team like mine? That doubt lingered. I went to a pivotal vSAN session on the financials of vSAN. At the end of that session, I walked up to John Nicholson and asked him a question. I said, how do you get someone like me, who's very, very comfortable with discrete storage and how we use it across a metro storage cluster, to move to something like vSAN? His answer was just one sentence, but it gave me a very good perspective that I used that night to go over the math.
Eventually, I figured out that going vSAN over discrete storage could save me up to a million dollars on my next renewal. I got really excited. I started going into the vExpert blogs, like William Lam's, getting on the subreddits, listening to other customers. I started to look at global deduplication and site maintenance mode for stretched clusters like ours. VCF started to feel like it was not going to be too big for us and that it would scale perfectly to my small team. Convincing the C-suite is never automatic. Our hardware wasn't up for renewal until March of 2026. We had SQL licensing coming up and Microsoft renewals overlapping with this change.
I drew on years of budget wrangling to craft a five-year plan for how we were going to slash our lease spending, optimize our Microsoft renewals, and deliver creative timing on these payments. I presented it all to the CFO and the COO. Once I made the case crystal clear about how we would manage that transition period, they were on board. What sealed it for us, and what continues to seal it for us, is how VCF is transforming our infrastructure from the ground up. Grinnell is over a century old. Our IT has been layered on decade after decade. It can be really hard to make change in an environment like that. VCF is helping us do that. It is not just software. It is really a unifier for us.
For the first time, my network, systems, DevOps, DBA, desktop automation, and telecom teams are working very closely together on the same platform. We also have security at Grinnell. Whether you are protecting 100 people or 10,000 people, a lot of the tooling is the same. Our small security team has a lot of ground to cover. VCF is turning out to be a force multiplier for our small security team as well. VCF is really this hub where we are all coming to work together. We are building this shared understanding of what we are all doing. We are all in the same meetings. We understand the why behind how this is architected and why we did it. On a platform like AWS, I would need to hire more people. We would need to get more people in.
We would have to build additional skills, and we would end up creating additional silos because of that. With VCF, it is really allowing my small team to extract maximum value from a cohesive set of tools. That unifying software extends to the pain point that exists between infrastructure and developers. Our developers were really struggling with the VDI that we had built for them to run their IntelliJ IDEA. We launched what I call Operation Monday. It is all about giving time back to the developer. We are going to containerize our IDE, and then we are going to deliver that to them using VMware Kubernetes Service and VCF Automation for the self-service piece.
It is a really exciting crossover engineering experience where we do not have to talk to them or teach them about the VDI performance intricacies, and they do not have to school us on their developer tools. What is this all going towards? What is it all laddering up to? It is really allowing us to capture the full power of private cloud. I talked about needing to hire more people to do this in AWS. There is this misconception that public cloud is always going to equal fewer people. I just couldn't make that make sense. I would definitely need more headcount with additional skills. The story I'm telling and the story you should be telling is that VCF builds you a private cloud with all of the features: self-service, scalability, data sovereignty, without the public cloud headaches.
Stop thinking about it as servers and infrastructure and start thinking and talking about it as private cloud, period. That $1 million savings that I was talking about, that's really just the start. VCF is going to change every single one of the renewal conversations that we have in the next 3-5 years. It's really exciting because all I see is possibility for saving money. I've used VMware products since the 2.x days, and I feel like VMware has made me two promises. The first one is that we're going to let you virtualize your workloads and extract maximum value from the hardware that you already own. They've been delivering on that promise for a really, really long time. The second one is that we're going to enable customers of any size to run a full-stack private cloud on-prem.
For the first time, I really feel like VMware and Broadcom are delivering on that promise. If my small team can embrace VCF, certainly so can you. If a small team like mine can have these big ideas and do big things inside of VCF, so can you. Dive into the sessions, talk to your reps. I really think that your aha moment is out there waiting for you at Explore this year. I want to say thank you to my Grinnell Mutual Infrastructure team. Without them, none of this is possible. I want to say thank you to 27 Virtual for being an amazing partner during our implementation. And thank you to Broadcom for finally delivering a platform that lets our small shop punch above its weight.
Thank you, Jeremy, and thank you to your whole team. It's amazing when you hear a powerful story like that from Grinnell Mutual. They modernized, they got faster delivery, stronger security, a small team able to scale, and simpler operations. Whether it's small and mid-sized businesses, large strategic enterprises, or big government industries, all of them can be powered by VCF. I changed my shirt. Why? Because we're not talking about just what we've done in nine; you can get all that stuff now. We're talking about the next generation of VCF. I'm really pleased to talk about three big areas that we are investing in to deliver the next part of the VCF 9.0 platform: infrastructure at the speed of the developer, private AI as a service, and cyber-resilient data. Hock talked about developers, and they need velocity. What creates velocity?
We've got to accelerate developers. We've got to help them move faster on that accelerated path to production. That's what creates velocity. What if developers got the autonomy they need, but IT stayed in control? That's the shift. Developers get everything they want, speed and agility for their applications, while the business maintains security, trust, and confidentiality overall. For developers, I'm really happy to announce all of these new things we are doing to accelerate developer speed, velocity, and autonomy. All of these are new developer services. vSAN native S3 object storage built into the platform, available to all of you. Yes, block interfaces, file interfaces, S3 interfaces. Secure, enterprise-grade Postgres and MySQL delivered by us. Storage and database as a service, available for all of you.
For all of those developers, a full IaaS stack, everything delivered as a service. You're going to see it a little later. GitOps with Argo CD built in, so you can do CI/CD pipelines and application delivery. You can go from a Git-based YAML spec of all of your different applications to auto-deployment of those applications. Istio service mesh. You don't just get containers as a service; you've got VMs and containers as a service, and I can also build function as a service and interface those functions through a service mesh that we maintain and support for you as part of the platform. Policy as code. Everything is codified. Everything is written down.
All of the specs for your firewall rule settings and your load balancer rule settings, everything is configured in code so that I can store it in my Git, publish it, and do my CI/CD pipeline push, you name it. Hardened containers, more on that later. These six new features are going to dramatically accelerate developer productivity. The cool thing is they build on something that's already there in 9.0. vSphere Kubernetes Services is native in the platform, and not just in 9.0; it's there in 5.2.2. This gives you full lifecycle management of Kubernetes. Most importantly, it's CNCF compliant, right? The Cloud Native Computing Foundation, the open source foundation that maintains Kubernetes. This is fully compliant with that, available to all of you. You get multi-cluster management so that you can manage across the different vCenters and different domain regions that you have in your environment.
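To make "policy as code" concrete, here is a minimal sketch of how a firewall rule could be declared in YAML and stored in Git. The API group, kind, and field names below are hypothetical illustrations, not the actual VCF schema:

```yaml
# Hypothetical policy-as-code spec; the kind and fields are
# illustrative stand-ins, not the real VCF/NSX object model.
apiVersion: policy.example.com/v1
kind: FirewallPolicy
metadata:
  name: legal-oversight-web
spec:
  appliedTo: app=legal-oversight   # label selector for the app tier
  rules:
    - name: allow-https-in         # permit inbound HTTPS traffic
      direction: ingress
      protocol: TCP
      port: 443
      action: ALLOW
    - name: default-deny           # drop everything else
      action: DROP
```

Because a spec like this lives in Git, every change to a firewall rule is reviewed, versioned, and pushed through the same CI/CD pipeline as application code.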
Kubernetes requires a whole set of services to actually be complete. All of the standard Kubernetes packages are maintained and supported by us for you. All part of VCF, all included. That means you get Prometheus for monitoring. You get a Harbor registry service. You get Velero, so you can back up your container-based applications. Much, much more, right? Pinniped for identity management. Everything included. This is what vSphere Kubernetes Service is. VMs, containers, fully orchestrated with Kubernetes in the platform. I'm particularly excited about this announcement. This is huge. We're taking the number one private cloud, which is your beloved VCF, and combining it with the number one cloud OS in the world. Anyone know what that is? It's Canonical Ubuntu. With this partnership, all of you are going to get an integrated Ubuntu with fully maintained security updates and maintenance updates included in the platform.
Long-term support included. You get chiseled containers. A chiseled container allows us to remove all the peripheral libraries that are unnecessary for running that individual container. What have you just done? You've reduced the security risk for that container, and you do it for every container that you have, minimizing any security threats or risk. AI-ready images. You've got all of the vGPU drivers, so out of the box you get the fastest AI deployment possible. Together with Canonical Ubuntu, the VCF platform, and vSphere Kubernetes Service, this is the easiest Kubernetes deployment possible. It is Kubernetes at scale. It is containers and VMs in one platform, all orchestrated with Kubernetes. Whoo! Thank you. I should have paused and waited for the applause there. It is very cool.
Instead of me talking about it, because it's much more fun to actually see a demo, which is why I have the shirt on, let's see what this accelerated path to production is. Remember: developer autonomy, IT control. I'm the IT guy. I've set up multiple organizations inside the platform. This is full multi-tenancy in the VCF platform: legal, finance, engineering, IT. I'm going to drill into the legal organization. You can see I've set it up across multiple regions so that they can deploy disaster recovery as they need to for their applications. I'll also go over to the services. I've enabled a whole set of services that that development team can run. They can run independently of IT, but all of this is under the policy control of IT. I've set up a Kubernetes service for them. They, of course, have a virtual machine service.
They have volume and network services because they, of course, need the full IaaS stack. I've also enabled some other cool things. I've enabled private AI. All of the AI services and all of those images are available to them. They have a registry service, the Harbor registry service, so they can manage their container images. You will notice they're missing data services. Most applications actually need a stateful database, so why don't we enable some database services that IT considers the right, controlled ones that their security teams have approved? We'll go and enable that. Go into the data services, right? This is database as a service. Go into the data service layer and add a new data service policy. This one I'm going to call the Postgres policy, so you can guess which database I'm deploying. I'll deploy it into one particular region.
You can see I have a choice: MySQL, Postgres, or even Microsoft SQL Server. We can deploy and manage SQL Server instances too. I'm going to select Postgres. IT has decided that the only secured versions that have gone through their quality pipeline are versions 14 and 15, and you can allow minor versions of them. Those are the only ones that right now are through the certification process and approved, so we are going to make those available. At this point, everything's available: database as a service. You'll notice that data services is now an installed service, ready for that tenant. Every tenant looks like this, and for each tenant you can decide the right services to deploy. It's as easy as that. That is what a cloud looks like. That is a big change.
Now, to walk us through the developer experience, please welcome on stage our Senior VCF Technical Lead, Sabina Anja.
Good morning, everybody. Let's have a little bit of fun. In this scenario, my team is responsible for the legal oversight application at a large enterprise. You can tell I'm one of the developers. This application helps us move faster without any compliance issues. Now, before we ship a new feature, maybe a self-driving capability, a new data-sharing model, or even an update to how we store things like customer contracts, we need to know: is this even legally allowed in every country we operate in? This is what this app does. It connects to a knowledge base of regulations, queries an AI model, and gives us a green, yellow, or red light based on region, product, and especially legal risk. To build it, I'm going to need a few things. I'm going to need a frontend. I'm going to need a Postgres database.
I'm going to need a GenAI backend to evaluate all of this data, a set of Kubernetes clusters for the components of the application, and a model runtime managed in my Harbor model gallery. Here's the key: I don't want to wait weeks to get infrastructure provisioned. I don't want to write tickets for load balancers or firewalls and have it still not work. I want infrastructure as code. As a developer, this is my starting point. All the services I need, databases, Kubernetes, load balancers, even private AI, are right here, ready to use. I have project spaces for the development and production versions of my applications. Inside the production project, I can spin up an application namespace for deploying my app. In the application namespace, I'll just fill in the basics: app name, region, and class.
The platform enforces the right policies behind the scenes. I can place my app across different availability zones. Each zone is an independent fault domain with separate power, network, and infrastructure. It's the same model the public cloud uses, and now I get that resilience in my private cloud. I've chosen multiple zones here, so I get resilience without ever needing to touch the infrastructure. I can select the level of isolation by choosing a virtual private cloud that's either dedicated or shared. Everything is in place for my application. From here, I'm going to shift to code, because my deployment is automated through Git. Let's go ahead and look at that application a little bit. I'm back in my IDE. This is home base for me. I'm defining everything the app needs in code. As a developer, as I've mentioned before, I don't want to open tickets.
I don't want to be bounced around different IT teams. I don't want to hear that the person is on holiday. I want to declare my security policies in YAML, with what traffic to allow and what to block, and the platform will enforce it automatically. I'm defining my Postgres cluster. There's no need to deploy and configure a database; IT has set it up for me as a service. I have replication, storage, and backup schedules, all of them captured in code. All of this means consistency every time, with resilience built in. This is GitOps, right? The workflow is just code and commit. The update is tracked, it's reviewed, it's reproducible. That's it. You get eight YAML files: firewall rules, database configs, cluster settings, frontend deployment. All of this is versioned together. When I push, Git automation takes over. The platform will deploy the app.
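As an illustration of the kind of declarative database spec being described, a Postgres cluster captured in code might look roughly like this. This is a sketch only; the kind and field names are hypothetical stand-ins, not the actual service's schema:

```yaml
# Illustrative sketch of a database-as-a-service spec stored in Git.
# The kind and fields below are hypothetical, not the real schema.
apiVersion: data.example.com/v1
kind: PostgresCluster
metadata:
  name: legal-oversight-db
  namespace: legal-oversight-prod
spec:
  version: "15"            # one of the IT-approved major versions
  replicas: 3              # replication handled by the service
  storage:
    size: 100Gi
  backup:
    schedule: "0 2 * * *"  # nightly backup, captured in code
```

The point is that replication, storage, and backup schedules all live in the same versioned repository as the rest of the application, so every environment is reproducible from the spec.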
It applies the policies and provisions the database. It's infrastructure as code, just like I expect. That is a lot going on there, and it is also quick. What just happened? Argo CD saw my Git change. This is my CI/CD pipeline. My application, legal oversight, remember, the contracts review, was out of sync, and auto-sync kicked in. The desired state includes the web service deployment, security policy, dev cluster references, and a Postgres cluster. The result? My app is healthy. My sync is okay. My frontend pods are running one out of one. From commit to a secured, fully running application and database, automated and auditable with GitOps. Once Argo's done syncing, this is where I'll go next. I'm going to come back into the platform because I want a view of how the app is performing, things like CPU, memory, even cost.
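The auto-sync behavior described here maps to Argo CD's declarative Application resource. The manifest below uses the real Argo CD `argoproj.io/v1alpha1` schema, but the repo URL, path, and namespaces are hypothetical placeholders for this demo app:

```yaml
# Argo CD Application registering the app for automated sync.
# Repo URL, path, and namespaces are illustrative placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: legal-oversight
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/legal/legal-oversight.git
    targetRevision: main
    path: deploy/production      # the eight YAML files live here
  destination:
    server: https://kubernetes.default.svc
    namespace: legal-oversight-prod
  syncPolicy:
    automated:
      prune: true                # remove resources deleted from Git
      selfHeal: true             # revert out-of-band drift to match Git
```

With `syncPolicy.automated` set, a push to the watched path is enough: Argo CD detects the drift between Git and the cluster and reconciles it, which is the "out of sync, and auto-sync kicked in" step in the demo.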
This is scoped directly to my application. If something spikes, I have direct visibility into the app, its resources, and its context. What is developer autonomy at the end of the day? Defining my application in code, pushing to Git, and letting the platform take it from there. I get to stay in familiar tools, things I'm used to, VS Code, GitHub, and I get to consume approved services from the catalog. At the end, I see everything in one place. That is the developer view. We're now going to go back to the admin view and hand it back to Paul.
Wow. Isn't it great to hear an excited, happy developer? I'm the IT guy. They used to think of me as the Department of Motor Vehicles. Yeah, the DMV, quick as that. Not anymore. Not anymore. Thank you, Sabina, for that. I should have said that at the very beginning. Anyhow, let's go back to the admin view. Here again, I have full visibility into it. I can see the quotas. I can see the region usage, memory, disk. I can see the clusters and the databases used. I can see, most importantly, costing. Let's drill into the legal application, right? We've just deployed that. On a per-application basis, I can see the actual growth and trends in that usage over time.
I can also drill down into the containers and the VMs that are used by that application. This isn't just about chargeback and showback; it's about empowering developers and better resource management. All of that is true. I also get full visibility into the application. Imagine how that helps you on diagnostics, working with your application teams when things have gone wrong, to actually root-cause issues. Amazing. What you've seen is the accelerated path to production. You've seen, from the IT side, the complete control they need to manage their environments. You've seen the developer, the happy developer Sabina, because she's got what she needed: automated delivery of applications, Git-based pipelines, CI/CD delivery. This is a big sea change, the accelerated path to production. That's one area.
For the next investment area, private AI as a service, please welcome on stage Chris Wolf.
All right, tick tock. We've got a lot to cover in this conference, so I'm going to get right to it. I want to start with the legal oversight app that you've already seen. Now here's the thing. Our worlds are never perfect, right? You saw this great scenario, but come on, in your world, something always goes wrong. This is where I'm happy to share with you the VCF Intelligent Assistant, an AI-integrated chatbot where you can ask questions, get help, and more easily resolve any support challenges you might run into. In this case, we're going to ask: hey, our legal oversight app is running into some performance challenges, what are some things you might suggest? We get some responses here.
Okay, the private AI GPU dashboard, that's a pretty good place to take a look. Cool. How am I going to get to that? Let's find out. Let's head over to the dashboard in VCF Operations. Surprise, right? We've got a red area. There's a problem here, a hot spot, and we want to load balance it. This is an AI application. Is this something I can vMotion to rebalance my cluster? Let's go back to the chat assistant and find out. Now we're getting more information. Notice the sources you're seeing here, for explainability: it's our docs, it's our KB articles, it's even blogs. Yes, of course, we're also indexing William Lam's blog. Don't worry, we've got you covered.
We can see that we can do this, so let's go to our vSphere client. The vMotion completes. These are large language models, large GPUs, major data sets, and we can vMotion AI workloads just like anything else, which is pretty, pretty freaking cool. We go back to the VCF Operations dashboard, and lo and behold, things are looking great. That's the first thing I wanted to share with you, and there's a lot more to come. I want to step back, though, and talk about Broadcom. People often ask, what's Broadcom's leadership role in artificial intelligence? What's Broadcom about? We can break it down into a couple of key points. When you think about open ecosystems, you should be thinking about Broadcom: our Ethernet business is backending the largest AI hyperscalers in the world today. When you think about Broadcom, you should think about interoperability. You look at VMware Cloud Foundation.
You look at the choice of hardware you have below the stack for your AI workloads and the choice of models and services you have above it. When you're trying to bet on an uncertain future, the best partner for you is going to be Broadcom. I'm happy to share more work we've done with NVIDIA as our partner. When you look at those open ecosystems, NVIDIA has been key to our AI journey for a number of years, and you see a number of announcements here that we're happy to share with you today. This includes additional GPU support: the Blackwell B200 and RTX PRO 6000 GPUs, plus ConnectX-7 and BlueField-3 NICs. That gives you DirectPath I/O, GPUDirect RDMA, GPUDirect Storage, and GPU passthrough support as well.
Finally, something you may not be aware of is our HGX reference architecture. A lot of you are purchasing AI infrastructure through your OEMs in that HGX form factor. You can put VCF on it, extract all the value, and move forward as well. It doesn't stop there. Last year, we announced our partnership with Intel and Gaudi 3 support. I'm happy to share with you today that we're taking that ecosystem one step further with a partnership with AMD: we will have virtualization enablement and support for the MI350 GPUs going forward. That gives you the enterprise software stack with an open ecosystem around it. Again, more choice for you: your choice of accelerators for anything you're looking to do with AI now and in the future.
If I pause and look back, it was three years ago. Time flies. Three years ago, we introduced private AI at this conference, on this stage. What's happened since then? The world has caught on to the notion that you can bring your models to wherever the data resides, and run those models at a lower cost without having to sacrifice privacy or control of your data. This isn't just our vision anymore; even the hyperscalers are doing it too, and we're happy to have them with us on this journey. Where we differentiate is that we are committed to choice: choice of AI models and services, choice of hardware going forward. You're not having to buy siloed AI appliances. You can bet with us on a common AI platform that is going to unlock choice now and in the future.
Customers have been on this journey with us as well. I am happy to share that over the past year, we have onboarded more than 80 customers, including a lot of household names. Several of them are partners who are here with us today. I would like to thank Mark Rahm, who is here, and Keith, who is here, along with U.S. Senate Federal Credit Union, University of Texas, and University of Bristol. There is a lot of good momentum across a large number of industry verticals, and it is continuing at a really fast pace. The innovation has not stopped there either. There is a lot more here. Some key things I want to highlight: we have shown you some of what we are doing in infrastructure already, and our core platform is going to continue to unlock a lot of that flexibility for you.
New innovations coming include not just Model Context Protocol support, but ensuring that you have a secure identity chain and secure role-based access controls as you bring these different data feeds into your AI services. Our multi-accelerator Model Runtime means you can deploy a model once and change accelerators, whether AMD GPUs, NVIDIA GPUs, or even CPUs, without refactoring your application. Finally, with multi-tenant models as a service, you can load a single copy of a model onto one or more GPUs and share it among multiple lines of business or multiple tenants while keeping all of the data private. This further lowers your costs as you run AI services internally, and it gives you the equivalent of what the hyperscalers are doing in the public cloud.
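The multi-tenant idea above can be reduced to a few lines: one copy of the model is loaded and shared by every tenant, while each request stays in its own tenant context and gets metered separately. The class and field names below are illustrative only, not any real VCF API; this is a minimal sketch of the pattern, not an implementation.

```python
# Hypothetical sketch of multi-tenant "models as a service": a single
# loaded model copy serves many tenants, with per-tenant metering.
from collections import defaultdict

class SharedModelServer:
    def __init__(self, model_name: str):
        # The model weights are loaded once and shared by every tenant
        # (in a real system, onto one or more GPUs).
        self.model_name = model_name
        # Per-tenant usage metering, e.g. for internal chargeback.
        self.usage = defaultdict(int)

    def infer(self, tenant_id: str, prompt: str) -> str:
        # Each request carries only its own tenant's data; nothing from
        # one tenant's prompt is visible to another tenant.
        self.usage[tenant_id] += len(prompt.split())
        return f"[{self.model_name}] response for tenant {tenant_id}"

server = SharedModelServer("shared-llm")
server.infer("legal", "summarize this contract for outside counsel")
server.infer("hr", "draft an onboarding checklist")
print(server.usage["legal"], server.usage["hr"])
```

The cost saving comes from the shared copy: the expensive resource (GPU memory holding the weights) is amortized across every line of business instead of being duplicated per team.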
How many of you are saying, "Chris, this is amazing. How do I get it?" You might also be saying, "You know what? I have that special someone, the holidays are around the corner, and they seem to have everything. What should I do?" I have the answer for you. We are now bundling private AI services in VCF 9.0. That is what I am talking about. Let's go. I have given you a taste of it; now I want to give you the full meal. There is no better person to show you how all of these services work than the engineering leader whose team built them. I would like to welcome Tasha Drew to the stage.
Thanks, Chris. Hi, everybody. Today, I am super excited to give you a quick tour of some of the capabilities of private AI services and a behind-the-scenes look at how we are using those services to deliver intelligent assist for VMware Cloud Foundation, which Chris just demoed to you. The first service we're going to look at is Model Gallery. As your organization scales its AI workloads, your developers are going to want access to the latest cutting-edge upstream and open-source models. This introduces an immediate enterprise governance problem the Model Gallery service is designed to solve. This service gives you tooling and workflows to safely connect to popular model registries on the internet, download models, and then security scan and validate the behavior of those models.
Once you're satisfied with the model's provenance, we repackage the model for you so you can upload it to your internal Model Gallery and share it with the appropriate teams and users using your organization's role-based access control. Here you can see my team's Model Gallery for intelligent assist and some of the other services we're developing. Now that you have models safely imported and shared, you're going to want to deploy those models as a service for your organization. To help with that, we've developed the Model Runtime service. From directly within VCF, you can select the model you want to deploy, pass in specific runtime flags, and you're off to the races. Here you can see my team deploying the Qwen2 embedding model, which is what we're using for intelligent assist.
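A deployment like this boils down to a small declarative record: which model, how many replicas, which runtime flags. As a hedged sketch, here is the kind of document such a flow might produce for version control; the schema and field names are invented for illustration and are not the actual VCF format.

```python
# Illustrative sketch: build a versionable deployment document for a
# model service. The "ModelDeployment" schema here is hypothetical.
import json

def model_deployment(name, model, replicas=1, runtime_flags=None):
    """Return a declarative record describing one model deployment."""
    return {
        "kind": "ModelDeployment",
        "metadata": {"name": name},
        "spec": {
            "model": model,
            # More replicas = horizontal scale behind the gateway;
            # bump this and re-apply to scale with no end-user impact.
            "replicas": replicas,
            "runtimeFlags": runtime_flags or [],
        },
    }

doc = model_deployment("qwen2-embed", "qwen2-embedding", replicas=2,
                       runtime_flags=["--max-batch-size=32"])
# Serialize so the document can live in your CI system for recreation.
print(json.dumps(doc, indent=2))
```

Keeping the document in source control is what makes the "quick and easy recreation" possible: the deployment becomes data you can review, diff, and re-apply.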
On the right-hand side of the screen, the YAML documents are being automatically created so that you can save this deployment in your CI system for quick and easy recreation. Your deployed models run behind the ML API gateway: your users continue to interact with the model via the APIs they're used to, but you have the operational flexibility to scale models up and down horizontally based on load, or do a rolling upgrade, with no end-user impact. Now that you have models as a service running, your users are going to want to use those models to deliver retrieval-augmented generation applications, or RAG apps. In this architectural pattern, developers instruct a model to compose its answers using only a set of documents that has been provided to it.
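The RAG pattern just described can be sketched in a few lines: retrieve the most relevant documents from a knowledge base, then build a prompt that constrains the model's answer to those documents. Everything here is a toy for illustration; a production pipeline would use an embedding model and a vector database rather than word overlap.

```python
# Toy sketch of retrieval-augmented generation (RAG).

def score(query: str, doc: str) -> int:
    # Crude relevance proxy: count shared words. Real systems compare
    # embedding vectors instead.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, knowledge_base: list[str], k: int = 2) -> list[str]:
    # Return the k most relevant documents for this query.
    return sorted(knowledge_base, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, knowledge_base: list[str]) -> str:
    context = "\n".join(retrieve(query, knowledge_base))
    # Instruct the model to answer only from the provided documents.
    return f"Answer ONLY using these documents:\n{context}\n\nQuestion: {query}"

kb = [
    "vMotion can migrate AI workloads between hosts",
    "The GPU dashboard shows cluster hot spots",
    "Quarterly holiday schedule for the cafeteria",
]
print(build_prompt("how do I rebalance GPU hot spots", kb))
```

The value of the pattern is exactly what the struggling customers below ran into: the hard part is not the prompt template, it is reliably getting documents out of where they live and into the retrieval store, which is what the Data Indexing and Retrieval Service handles.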
However, when we talked to our customers, we found that while they saw tremendous value in RAG apps, they were struggling to reliably get their documents out of where they were stored and correctly processed and stored in a vector database. To meet this challenge, we built the Data Indexing and Retrieval Service. This service provides data connectors for popular document locations like Google Drive, Microsoft SharePoint, and Confluence. You can select which documents or folders should be provided in a knowledge base for the RAG app, and we take care of processing those documents. You can also set a regular refresh policy to make sure your AI application's data stays fresh as the original documents are updated. Here you can see the documents my team has processed for the intelligent assist. The final service I'm going to highlight today is Agent Builder.
Agent Builder is a higher-level service you can provide to your developers and data scientists, where they can come to a UI and see the models you're running for them and the knowledge bases available for them to use. Users can then provide specific prompt instructions to the model, manage their tools and knowledge base settings, and quickly test out the agent they've created, allowing for a fast inner development loop. Here you can see us testing out the intelligent assist for Chris's demo. Once you're happy with the agent you've created, it is saved, and you can use it as a backend service to power your AI applications. Thanks for going on this quick tour of private AI with me. If you'd like to learn more about how to deploy and use these services, please check out this blog for step-by-step instructions. Now back to you, Paul.
Thank you, Tasha. Private AI Foundation is now included. How about that? There's a third area: cyber-resilient data. Security resilience is no longer a checkbox; it's an imperative. We've seen it across the industry. One breach, one outage, one ransomware hit, and suddenly it's not just your systems at risk. It's your customers, your IP, your reputation, even your license to operate. Marks & Spencer, back in May, had a $440 million loss and weeks of downtime online and in store. A leading technology company, Snowflake, had a credential attack that impacted 165 companies using their service. Government and the public sector aren't immune either. The U.S. National Public Data breach exposed 2.9 billion records, probably all of your records: social security numbers, usernames, passwords, and more. Billions of records, billions in losses. VCF is your secure foundation.
Built into the platform today, we already have multi-factor authentication, encryption at rest and in motion, secure network zoning, live patching so you can remediate those CVEs, and much, much more. We extend that with our vDefend and Avi work, which addresses runtime security: how do I protect applications and see what's actually happening on the network? In real time, you get zero-trust security, deep threat visibility, and web app protection, meeting your PCI and HIPAA compliance needs. Of course, there's Tanzu. I've got to protect the developer pipeline, and we can do that. We can not just help you protect it; we can make sure that CVE remediations are done with ease, because you can push code with ease. You can implement guardrails within your applications, with fully automated builds and service controls. We're not stopping there. I said we're innovating.
I'm announcing today our new VCF Advanced Cyber Compliance. This new extension to VCF provides a very powerful capability: complete, continuous compliance enforcement, not just for your VCF environment but, more importantly, for your applications. It's based on our Salt technology, but fully available to all of you. We have enhanced platform security, protecting the resiliency of the platform itself, secure-by-design container images, and confidential computing built in across AMD- and Intel-based environments. We do proactive assessments to monitor and maintain your environment for you. Of course, if things go wrong, there's fully automated ransomware and data recovery: all of our VMware Live Recovery capability is part of Advanced Cyber Compliance, so you can do disaster recovery, compliance recovery, and clean room recovery. We've covered a lot. You've seen all the work we're doing on engineering the next generation of VCF.
We're not just talking about VCF this week. There is a whole lot more. As I like to say in Ireland, there's a shit ton more information that you're going to hear about this week. Hopefully you enjoy it. These are really, really exciting times. We've covered a lot: innovation in the data center, innovation for AI, innovation for security. We've heard from customers, just like all of you, who are building their private cloud and seeing real results. Before you go, I want to share one thing with you. Standing here today, I think of the journey we've been on together. Twenty-five years ago, when we introduced server virtualization, people thought we were crazy. Why would you want to share a server? You, all of you, the people in this room, you saw the potential.
You transformed server economics forever. You transformed the data center through encapsulation and standardization of applications. We didn't stop there. We extended virtualization to networking and storage, delivering on the promise of the software-defined data center. We had our doubters, and all of you proved them wrong. You redefined the data center, making it more flexible, more efficient, more resilient, software-defined. Now we're at the next inflection point: the modern private cloud, an agile, secure, and cost-efficient cloud for all applications, deployed anywhere. You're not just IT practitioners. You're the architects of the future, writing the next chapter of the data center. Remember this: you're not just implementing technology. Together, we are redefining IT. Thank you for being here, and have a great week.