Welcome, welcome. Welcome to the launch of Intel's 4th generation Xeon processor and our Max family. You know, today isn't just about announcing a product, it's about showing how customers and the industry are already taking advantage of this amazing technology. This launch is very real, right? OEMs, our cloud providers, our developers are already embracing and delivering the 4th-gen Xeon and Max family of processors. The 4th-gen and Max family delivers extraordinary gains in performance, efficiency, and security, breakthrough new capabilities in AI, cloud, and networking, and powers the most powerful supercomputers that have ever been built. It truly is a pleasure to be back in person, interacting with you, and participating in today's announcement. It's also great to hear from our industry colleagues.
You know, people that we've worked with for decades, who've been partners in technology with us. We wanna hear about how they're excited about today's announcement and today's technology. After we finish up with them, then we're gonna hear from our very own Sandra Rivera, as she launches the fourth gen and Max series of processors. With that, let's hear from our industry partners now. Thank you so much for being here in person for today's announcement. For those online, today is a great day. Play the video.
Dell Technologies and Intel have a long and successful history of innovation. We are very excited to continue our collaboration with the launch of Intel's fourth generation Xeon scalable processors. These processors will power our new sixteenth generation PowerEdge servers, reimagined for the rise of multi-cloud, the edge, and the incredible amount of data that needs to be stored and secured.
Today, we live in an increasingly hybrid world where enterprises are becoming edge-centric, cloud-enabled, and data-driven. HP and Intel have delivered breakthrough innovation for decades. We just launched the new HPE ProLiant Gen11 servers and HPE Cray supercomputers, built with the new fourth generation of Intel Xeon scalable processors, pushing the boundaries of performance, enabling AI scale, and fueling innovation.
Lenovo and Intel have enjoyed a strong and successful collaboration across the new IT architecture. As one of the fastest growing infrastructure solution vendors, Lenovo is excited to participate in the launch of the fourth generation Intel Xeon scalable platform, which will power Lenovo's new infrastructure solutions, including ThinkSystem, ThinkAgile, and ThinkEdge.
With this launch, we are so excited to bring Intel's fourth generation Xeon Scalable processors to Cisco's newest line of UCS M7 servers, including UCS X-Series, a first of its kind modular system managed by Cisco Intersight and our most powerful energy-efficient server yet. Together, Cisco and Intel are creating significant flexibility, performance, and sustainability benefits for our customers.
We support Intel's 4th generation Xeon Scalable processor, codenamed Sapphire Rapids, and the Max Series with over 20 new product lines and rack-scale, plug-and-play total IT solutions that deliver the best performance-per-watt green computing. We offer rack-scale solutions incorporating servers, storage, networking, security features, software, and services.
Cloud and AI are revolutionizing how companies do work and how their products work. Computing demand is growing exponentially. Data centers already use 4% of the world's electricity, up from 1% just five years ago. This growth isn't sustainable. To change this trajectory, we must accelerate every application possible. NVIDIA is dedicated to accelerated computing. One NVIDIA accelerated server can reduce processing time, energy, and cost by significant factors. We are delighted to pair Intel's 4th generation Xeon CPUs with NVIDIA H100 GPUs and ConnectX-7 networking for our new generation of cloud, supercomputing, and AI systems.
Please welcome the Executive Vice President of the Data Center and AI Group, Sandra Rivera.
Hello, and welcome to the launch of our latest generation of data center solutions. Today represents an important day for Intel and for our customers, and I'm delighted that you're joining us to celebrate this moment. For over two decades, Intel technology has driven innovation in data centers, in networking equipment, and in cloud computing infrastructure across the globe. We work closely with many of you to solve the greatest computing challenges at scale. Today, we're building upon that foundation of innovation by bringing to market a highly differentiated portfolio of data center solutions that will enable the next generation of solutions, so businesses can continue to evolve and grow. You are the critical partners working with us to invent and deliver the future of technology.
When I joined Intel in 2000, the cloud was giving us an early glimpse of how technology would impact all of our lives. Today, data is being used to transform entire industries. In healthcare, medical researchers are using thousands of datasets from patients to identify and predict brain tumors. In agriculture, farmers are using autonomous solutions to boost crop yields. In retail, online search data is being used to serve up better recommendations for consumers. The world as we know it is evolving before our eyes, and it's exciting to see our customers tackling these complex challenges and pushing the boundaries of innovation. As organizations look to scale, drive down costs, and generate new revenue streams, it's more important than ever that businesses have the technologies that drive the highest business value.
For three generations of Intel Xeon Scalable processors, we've architected our data center solutions with your business needs in mind. Our unique approach is focused on providing real-world, purpose-built workload acceleration that delivers superior system performance at greater efficiency. Purpose-built workload acceleration is not just about adding more cores. It requires a true system-level approach in which highly optimized software is tuned to the differentiated features in our hardware in order to accelerate the most critical business workloads, including AI, networking, HPC, and security. Today, we're taking purpose-built workload acceleration to the next level. I'm excited to unveil Intel's 4th-gen Xeon Scalable processors. She's a beauty. These processors represent a paradigm shift in the way businesses run their workloads and solve their computing challenges. 4th-gen Xeon features the most accelerators built into a data center processor in the industry, providing leading performance on the workloads that matter most.
We've demonstrated an average 2.9 times increase in performance-per-watt efficiency across a group of the most common workloads. These aren't just industry benchmarks; these are real-world workload performance gains. When we look at total cost of operations, with acceleration, customers can expect to see TCO improvements ranging from 52% to 66%. These step-function performance gains are only possible because of the differentiated features we built into 4th-gen Xeon. These allow customers to make optimal use of core resources for greater CPU utilization and improved power efficiency. For workloads like AI, customers can achieve up to 10 times higher PyTorch real-time inference and training performance using Intel Advanced Matrix Extensions. In networking applications, customers can encrypt data using Intel QuickAssist Technology while using up to 47% fewer cores at the same performance level.
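As a rough back-of-the-envelope sketch of what a performance-per-watt gain means for fleet consolidation: the 2.9x figure is from the talk, but the fleet size below is purely an illustrative assumption, and real TCO models also fold in licensing, rack space, and cooling.

```python
def consolidation_estimate(baseline_servers, perf_per_watt_gain):
    """Estimate how many new-generation servers can replace an existing
    fleet at equal total throughput and equal per-server power draw."""
    new_servers = baseline_servers / perf_per_watt_gain
    energy_reduction = 1 - new_servers / baseline_servers
    return new_servers, energy_reduction

# Illustrative: a hypothetical 50-server fleet, 2.9x perf/watt gain.
servers, savings = consolidation_estimate(50, 2.9)
print(f"replacement servers: {servers:.1f}")        # ~17.2
print(f"energy/footprint reduction: {savings:.0%}")  # ~66%
```

Note that 1 − 1/2.9 is roughly 66%, which lines up with the top of the TCO improvement range quoted above.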
This represents the future of purpose-built workload acceleration. We could only achieve these gains with close collaboration with the world's leading software providers, OEMs and ODMs. There are thousands of software engineers at Intel working with our ecosystem partners to optimize and validate applications to run best on Intel hardware. Xeon-based infrastructure is available virtually everywhere in the world. Today, there are over 100 million Xeons installed in the market, from on-prem data centers to 5G networking equipment, to edge infrastructure and of course, in cloud service provider services. Within the cloud, Xeon is the most ubiquitous processor in the world. Xeon-based services are available from virtually every cloud service provider in every global region, and this pervasiveness ensures customers get consistent performance and reliability when running their workloads in the cloud.
Let's hear from AWS about the new business services they're offering based on our latest data center solutions.
I'm Dave Brown, Vice President of Amazon EC2. It's a pleasure to be here for the launch of the fourth generation Intel Xeon scalable processors. AWS and Intel have been on a journey together, innovating for customers since AWS introduced the concept of cloud computing with Amazon EC2 in 2006, over 16 years ago. Since then, customers have launched over 30 billion EC2 instances. AWS has built the most reliable and secure global cloud infrastructure with the broadest and deepest portfolio of instances. Amazon EC2 provides customers with the most options for compute so they can choose the right infrastructure for their application and business needs. To date, AWS has introduced over 400 Intel-based instances optimized for a wide variety of applications. At our recent re:Invent conference, we launched our latest Intel-based instances, including network-optimized instances as well as an optimized offering for HPC workloads.
Customers such as Nasdaq, FanDuel Group, Dana-Farber Cancer Institute, and Nissan Motor have benefited from the performance, reliability, availability, security, and scale of AWS infrastructure with Intel Xeon scalable processors for their applications. AWS is excited to continue our collaboration with Intel with our support for fourth generation Intel Xeon scalable processors in the Amazon EC2 portfolio. Amazon EC2 R7iz instances, already available today, use the fourth generation Intel Xeon scalable processors and have the highest performance per vCPU among our x86-based EC2 instances. They also deliver up to 20% higher performance than comparable high-frequency instances and are ideal for electronic design automation, relational databases, data analytics, and other workloads requiring a combination of high compute performance and high memory.
Intel is also a customer of AWS, including running their electronic design automation workloads on AWS, and we are excited for Intel to utilize the latest Intel Xeon scalable processors on AWS for these workloads. Congratulations to the Intel team on this important launch milestone. We at AWS look forward to our ongoing collaboration with Intel to deliver more value for AWS customers.
With over 400 Intel-based instances, Xeon is the most widely deployed platform in AWS's infrastructure, allowing customers to access the unique features in our hardware from every region in the world. In addition to AWS, many of the world's leading cloud service providers already have 4th gen Xeon services in public preview and general availability. These include Alibaba, Baidu, ByteDance, Google Cloud, IBM Cloud, Microsoft Azure, Oracle Cloud, and Tencent. I think you'll be interested in what they have to say.
At Google Cloud, we're committed to providing our customers with infrastructure choices that are optimized for real-world workloads so that they can get the best experience and the most advanced capabilities. Our deep collaboration with Intel helps us meet this objective. As we look to the future, we look forward to our customers taking advantage of the new AI acceleration and security features enabled on this latest Intel Xeon processor: AMX and TDX. We value our close partnership with Intel and look forward to collaborating on new Google Cloud offerings to deliver increased performance, greater efficiency, and functionality.
We are excited to be one of the first global cloud providers that will offer 4th generation Intel Xeon scalable processors. The expansion of Intel Xeon technology in IBM's cloud infrastructure and software opens up what is possible for high-performance computing and a wide variety of artificial intelligence and other enterprise workloads. This is the era of secure, high-performance computing.
We offer cloud infrastructure to help our customers across a wide range of enterprise workloads. We'll be supporting the new fourth generation platform across all of our cloud compute capabilities. Customers such as FedEx have found that the combination of OCI and Intel technologies provides them with the flexibility to manage capacity during peak-season surges.
When I talk with our cloud partners, data security is top of mind. As the compute landscape evolves to become more distributed, with workloads moving across ecosystems and multi-tenant environments, protecting data at every step of the workflow is paramount, whether it's data at rest, in transit, or in use. Intel and our ecosystem partners pioneered a data privacy technology called Confidential Computing to enable businesses to handle their sensitive data in cloud environments. Intel is the only silicon provider to offer application-level isolation with Intel Software Guard Extensions, which provides the smallest attack surface for confidential computing, whether in private, public, or edge cloud environments. Now with 4th gen Xeon, we're delivering a new level of virtual machine isolation called Intel Trust Domain Extensions. This technology is ideal for porting existing applications to run in a confidential computing cloud environment.
Microsoft Azure, Alibaba, Google Cloud, and IBM Cloud will be the first to offer Intel TDX ahead of general availability. Here to share more about how they're using this innovative technology is Microsoft.
Across industries, Azure customers have been using Confidential Computing to achieve new levels of data privacy. The Intel third generation Xeon scalable processor with Software Guard Extensions has been a foundational element of Azure Confidential Computing since the beginning.
We look forward to being one of the first cloud providers to offer confidential services based on Intel 4th generation Xeon Scalable processors with Intel Trust Domain Extensions later this year, enabling organizations to achieve confidentiality by seamlessly lifting and shifting their workloads without requiring any code changes.
Thank you, Mark. Intel and Microsoft have been great partners in delivering impactful technology to the world for many years. Speaking of impactful technologies, artificial intelligence is the most disruptive and transformative technology in computing today. AI is helping industries across the globe process, make sense of, and make better decisions from the massive amounts of data being generated. For generations, we have built AI accelerators into the world's most widely deployed data center processor. Beyond the hardware, we are working with the ecosystem to ensure that AI frameworks and AI software are optimized for Intel hardware and readily accessible. This work is helping us democratize AI and ease its deployment across all industries.
My team has played a major role in Intel's AI journey, and to talk more about how we're making this transformative technology broadly available, please welcome Corporate Vice President and General Manager of Intel Xeon products, Lisa Spelman.
Hey, Hind. Oh. It's good to be here.
It's great to have you here, Lisa Spelman. How many Xeon launches have you now been a part of?
Okay. Well, unbelievably, this is my 8th Xeon launch. I have to say that every single one gets more exciting. You know, you mentioned that democratization of AI. That's possible because of the accessibility of Xeon. This has been a multi-year journey for us, working with our customers and our partners. In fact, it was in 2017 that we first introduced built-in AI acceleration to Xeon. Since then, we haven't stopped. Every generation, we're adding to that hardware acceleration. Even more importantly, we are working closely with the software ecosystem to deliver that out-of-the-box ease and performance. With 4th Gen, we're taking it to the next level. Everything that's been added to 4th Gen Xeon is because of our customers' input and their requirements.
That ubiquity of Xeon, plus the artificial intelligence performance that customers now can access, means that we're delivering today this wonderful opportunity for them to deliver both deep learning training and inference on the CPU that they already have in their environment.
With this progress, we've reached a tipping point with widespread deployment of artificial intelligence on Intel architecture. We recently sat down with Fujitsu to talk about our work together and how AI is accelerating their digital transformation.
Let's listen in.
Thank you, Lisa, and greetings from Munich, Germany, where I'm here at the offices of Fujitsu, meeting with my colleagues, Jochen and Alex.
Hello, Hind. Welcome to Fujitsu Munich.
Hi.
Welcome, Hind. Great to have you here.
Excited to be here.
Let's head for the meeting room. Please follow me.
Jochen, tell us, how do you start your DX projects?
For the fast and cost-effective development of AI and to process a large amount of data, our customers can leverage the Fujitsu DX Innovation Platform. Together, we are able to develop an AI end-to-end solution to support our customers with their business strategies.
Sounds great. Can you give us an example of that, Alex?
One customer, for example, wanted to improve their customer services department and at the same time streamline their processes. The DX Innovation Platform and the Sentiment Analyzer from Fujitsu were a very good fit for that. Let me show you the latest version of the Fujitsu Sentiment Analyzer. What you can see here is feedback from social media about a certain topic that might be interesting for you, about your products, about your services, and these graphs show you whether the topic is perceived as positive or negative by the market. We co-design digital transformation projects and then engage ecosystem partners like Brainpool for joint development and joint implementation. Let's ask Kasia Borowska from Brainpool about her perspective. Hi, Kasia.
Thank you, Alex. With the Sentiment Analyzer tool, for example, which we developed jointly with Fujitsu, we were able to engage some of the best engineers and machine learning specialists, who leveraged Intel OpenVINO and Hugging Face pre-trained models. That saved us huge amounts of time on the AI development.
Thank you very much, Kasia.
In our example, Fujitsu and Brainpool were able to put into production a German sentiment analysis model using OpenVINO for software optimization and our latest 4th gen Xeon Scalable processor as the hardware. This improved inference performance by leveraging Advanced Matrix Extensions; we are looking at around 4 times the performance gen over gen. Intel recognizes that to be successful with AI, developers need an integrated AI platform with hardware, software, and a partner ecosystem. This combination is what helped Brainpool and Fujitsu remove the complexity that is sometimes inherent to AI and accelerate the building and deployment of AI on the Fujitsu DX Innovation Platform.
The Fujitsu DX Innovation Platform is made for fast, flexible realization of extraordinary results in AI projects. Built-in accelerators and Intel OpenVINO kickstart our projects. The good thing about OpenVINO is that it comes with pre-trained AI models for all our AI projects, and we can run those projects without discrete accelerators. AI is meaningful for me because it gives us the possibility to combine the latest AI technology from Fujitsu, Intel, Brainpool, and our ecosystem partners, and to help our partners and customers become data-driven and more resilient.
Now back to our studios where Udo and Sandra will continue the conversation.
Udo, thank you so much for joining us here today.
Yeah, thank you, and it's really an honor to be here at Intel.
Udo, what were your technology requirements for defining the platform that you're offering to customers?
On the one hand, of course, low latency, because when customers are using the platform, it has to be very responsive. On the other hand, energy consumption, because that is playing a major role. We are facing climate change everywhere, and energy consumption has to be reduced. I think the new 4th Gen Xeon platform will be a great step forward in this respect. What we have seen with 4th Gen Xeon is the capability of having dedicated accelerators in place. We have accelerators for AI, which is very important for us, so we can move that workload to a dedicated unit on the chip and make sure the CPU cores are free for whatever else they need to do. And there's support from the framework perspective, which means support for TensorFlow, PyTorch, OpenVINO, MXNet.
We also see an enormous speed-up with the new 4th generation Xeon processors, because with the tile technology you have dedicated accelerators to multiply those matrices and to make sure the data can be moved very quickly from one point to the other. This allows us to really squeeze the most out of the CPU. What we see in most cases is that the CPU is more than good enough to do training and, of course, inference as well. That is what we are targeting.
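The "tile" remark refers to how AMX operates on small sub-matrices held in tile registers. A pure-Python sketch of that blocking idea, illustrative only (real AMX performs a tile-sized multiply-accumulate in a single hardware instruction rather than a scalar loop):

```python
def matmul_tiled(A, B, tile=2):
    """Multiply matrices A (m x k) and B (k x n) one small tile at a
    time: the same blocking idea AMX implements in hardware, where each
    tile register holds a sub-matrix and whole tiles are
    multiply-accumulated at once."""
    m, k, n = len(A), len(B), len(B[0])
    C = [[0.0] * n for _ in range(m)]
    for i0 in range(0, m, tile):
        for j0 in range(0, n, tile):
            for p0 in range(0, k, tile):
                # Multiply-accumulate one pair of tiles into the C tile.
                for i in range(i0, min(i0 + tile, m)):
                    for j in range(j0, min(j0 + tile, n)):
                        for p in range(p0, min(p0 + tile, k)):
                            C[i][j] += A[i][p] * B[p][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul_tiled(A, B))  # [[19.0, 22.0], [43.0, 50.0]]
```

The blocking keeps each working set small and reused, which is also why tiled execution moves data efficiently between memory and the compute units.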
Yeah. Well, that was part of the architecture decisions that we made, which is how can we incorporate more acceleration into the base CPU platform. What are the ways customers are using the platform?
One project I really would like to highlight, because I'm involved in it, is ASFINAG. ASFINAG is the company operating and maintaining highways in Austria. What we would like to do with this R&D project is detect electric vehicles. Unfortunately, there is no standardization in the European Union. In Germany, we have an E at the end of the number plate. In Austria, the characters of the number plate are green. This means we have to detect an attribute of the characters, which is the green color. There are also some emblems on the plate itself, and this is not an easy task. In the first stage, we were able to do 2.3 detections per second. This was okay-ish.
We had a chat with the colleagues here from Intel, and they said, "Yeah, we can improve this." They did an improvement with OpenVINO, and all of a sudden we are now capable of achieving a detection rate of 30 images per second with the existing Xeon technology. With the new 4th gen, we are expecting a significant further improvement, because the numbers we have seen from Intel are really, yeah, unbelievable.
All great examples of where we are having a positive impact and driving that time to value, accelerating that pace of innovation. What are you excited about in terms of the future?
What we see right now is that there is a hurdle, from a customer point of view, to taking the first step into AI. I would call the new Xeon generation really democratizing AI, because you likely have a Xeon server in place already. And with all of those capabilities, you have accelerators, it's super fast, you can handle the data, and you can use cryptography, compression, decompression, all of those aspects.
Yeah. Well, that is our purpose, our mission to democratize AI, to drive outcomes that improve the lives of every person on Earth. Udo, thank you again for talking with us today and for sharing so many great examples of the high-impact work we're driving together and how we use technology for good.
Yeah. Thank you for the opportunity to be here.
This is just one example of our customers already building solutions on 4th gen Xeon, taking advantage of that acceleration, AMX, the crypto acceleration, and the software that I referenced. With this improved performance, you can deliver deep learning inference workloads like natural language processing, recommendation systems, and image recognition. Gen over gen, we deliver 55% lower total cost of ownership for real-time AI inference workloads. It's impressive performance paired with TCO that makes Xeon the industry foundation for inference. And with AMX, Xeon is quite capable for training as well. Xeon has been the foundation for scalable high performance computing for decades. For the first time, Intel is integrating high bandwidth memory into the Xeon processor and delivering Xeon Max.
This innovative product addresses the number one constraint in high performance computing: it dramatically increases the available memory bandwidth and accelerates key HPC workloads like life and materials sciences, manufacturing, and high energy physics. For years, compute capacity has grown at a rate much faster than memory bandwidth. This has led to a situation where workload performance doesn't keep up. It's stranded compute: we haven't been feeding the cores enough data. This has been an obstacle to progress, leading to wasted compute cycles, wasted energy, and wasted cost. Xeon Max is the first and only x86-based processor with integrated high bandwidth memory. And for all you software developers out there, one of the most exciting things is that you can take advantage of all of that bandwidth without having to make code changes.
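The "stranded compute" point can be made concrete with the roofline model: attainable performance is capped by the lesser of peak compute and memory bandwidth times arithmetic intensity. The numbers below are purely hypothetical illustrations, not product specifications.

```python
def roofline(peak_flops, mem_bw_bytes, intensity_flops_per_byte):
    """Attainable performance under the roofline model:
    min(peak compute, memory bandwidth * arithmetic intensity)."""
    return min(peak_flops, mem_bw_bytes * intensity_flops_per_byte)

# Hypothetical figures: a 2 TFLOP/s socket, a memory-bound kernel at
# 4 FLOPs/byte, and two bandwidth tiers (DDR-class vs HBM-class).
peak = 2.0e12
kernel_intensity = 4.0
ddr_bw = 0.3e12  # bytes/s, assumed
hbm_bw = 1.0e12  # bytes/s, assumed

print(roofline(peak, ddr_bw, kernel_intensity) / 1e12)  # 1.2 -> bandwidth-bound
print(roofline(peak, hbm_bw, kernel_intensity) / 1e12)  # 2.0 -> compute-bound
```

In the first case the cores sit idle waiting on memory, which is exactly the stranded compute described above; raising bandwidth lets the same silicon reach its compute peak without any code changes.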
The product brings a 3.7x performance improvement across real-world workloads and requires 68% less energy than deployed competitive systems. Xeon Max provides you with the opportunity to maximize your bandwidth, maximize your compute, and maximize developer productivity. Now, Jim Lujan from Los Alamos National Laboratory is gonna share with us a bit about how they're solving real-world problems using both 4th gen Xeon and Xeon Max.
Hey, Jim, it's great to see you. Can you tell me a little bit about how the lab uses high performance computing to advance your mission?
Absolutely. It's good to be here, Jeff. Thank you for having me. At Los Alamos, we use modeling and simulation for a variety of areas, primarily stockpile stewardship, but we also do modeling and simulation in epidemiology, planetary defense, climate modeling.
What are some of the key bottlenecks that you experience in those workloads?
For us, we've done an analysis on some of our key applications. Our applications, while they do a lot of compute, are not as restricted by compute as they are with memory. When we put together the Crossroads procurement, we were looking for ways to improve time to insight, to really reduce those bottlenecks on memory bandwidth. Intel's Xeon Max processor with that kind of High Bandwidth Memory was really exciting for us because it really does improve that time to insight that we were looking for.
The time to insight is really a combination in my mind about the runtime, time it takes to actually run the execution of that program, as well as the code porting effort to bring that existing code onto a new platform. What level of challenges do you face to bring existing codes onto Xeon Max?
That's a good question. The Intel Xeon Max processor, you really can just move from an evolutionary perspective of where we're at today onto this new processor. For our application developers, it's been relatively trivial. We have examples of where we've literally taken an old binary from one of our existing Xeon machines and put it on an Intel Xeon Max processor, and we've just been able to run it. We're just really thrilled with the ability to bring forward, you know, the tens of millions of lines of code with relative ease into this new technology.
Well, it's wonderful that it can just work right out of the box, but it really also comes down to what performance are you able to get when you bring that code forward. What are your earlier experiences like?
We've done some preliminary testing with the Xeon Max processor with HBM, and we're extremely encouraged by that. We're averaging around a 4X improvement overall with our particular applications. We're really pleased to be able to essentially recognize a 4X improvement over our existing performance with basically, you know, buying a new chip. We're just really thrilled with the partnership that we have with Intel. We've been working with you guys for quite some time overall on the Crossroads project, and we're just really excited to be a part of bringing this product to market.
Thank you, Jim. We're very excited as well, and it's a great partnership and hearing from your engineers and scientists on the requirements and letting that really define our roadmap to solve your problems, that's really what brings joy to us. Thank you so much.
No, thank you, Jeff.
We also bring acceleration together with optimized processors for key networking workloads to offer lower latency, higher throughput, and the deterministic performance that's so important, across a range of core counts and power envelopes. With 4th Gen Xeon, communication service providers are able to double their performance per watt for their vRAN workloads, meeting their critical quality of service, scaling, and energy requirements. They will also see a 30% gen-over-gen performance improvement for their 5G core workloads. This improvement is delivered through new instructions, but also through our next generation platform, which delivers improved memory and PCIe bandwidth, all in the same power envelope as the prior generation.
We sat down with Telefónica's Chief Technology Officer, Enrique Blanco, to hear a bit more from him about what performance means for their offerings, for their customers, and for their business.
Hello, Enrique. Thank you for joining us today, and thank you for being a long-term collaborator with Intel.
No, thank you. Thank you very much for the opportunity.
As we know, networks have been adopting cloud technologies for the past decade. How have these changes impacted how you've designed and built out your 5G network? Why is this so important for the future of networking and your customers?
We are building a fully softwarized network. This is our first step, not only in the IT evolution, which comes first, but across all the platforms that we are building today with a pure cloudified approach. It is the access network: the OLTs and the baseband are going to be fully centralized and virtualized. It is the only way to reach the final architecture that we are looking for.
Today, Intel is celebrating the launch of our 4th-gen Xeon Scalable processors. What excites you the most about this latest generation of processors?
Today, when I look at Open RAN, vRAN, CNFs, reducing power consumption while driving additional throughput, I can only get this platform, this architecture, if I'm using your new fourth generation. You are providing the technology that I need to make my dream real. How can I fully virtualize? How can I open all the interfaces? How can I connect hundreds of antennas to a centralized approach? I can only do it if you are doing your homework. That is what the fourth generation of Xeon means to me. It is great for us. It is your magic weapon that is helping us reach our final architecture.
It is a matter of trust because this long-term partnership means trust. In our most advanced step in the Open RAN capabilities, we trust in Intel.
What are some of the things you are doing to drive more energy efficiency and sustainability in how you build and operate your telecommunications network?
Over the past six years, Telefónica has reduced total power consumption by 7% while multiplying data traffic by seven. That has been possible because we are using the right technology; we could not have done it without going hand in hand with the evolution of your microprocessors, which help us move faster, multiplying compute capability by two or three while reducing energy consumption.
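The efficiency claim above is worth a quick sanity check: if traffic grew sevenfold while total power fell 7%, the energy spent per unit of traffic dropped by a large factor. A minimal sketch of that arithmetic (the 7x traffic and 7% power figures come from the remarks above; the derived ratio is our own calculation):

```python
# Sanity-check the efficiency math behind the Telefónica figures quoted above.
traffic_multiplier = 7.0       # data traffic grew 7x over six years
power_ratio = 1.0 - 0.07       # total power fell to 93% of its earlier level

# Energy per unit of traffic, relative to six years ago.
energy_per_unit = power_ratio / traffic_multiplier

improvement = 1.0 / energy_per_unit
print(f"Energy per unit of traffic: {energy_per_unit:.3f}x the old level")
print(f"Roughly a {improvement:.1f}x efficiency improvement")
```

In other words, each unit of traffic now costs about 13% of the energy it did six years earlier, roughly a 7.5x improvement.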
As we look ahead, can you talk about some other areas of collaboration between Intel and Telefónica that you're looking forward to?
We are cooperating in 5G. We are cooperating in Open RAN. We are cooperating in Edge. We need to maintain our cooperation in softwarization, using all the capabilities of containers. You are at the core of our evolution, at the core of our automation. We are serving and improving the lives of the people we serve. When you are fully aware of that, something appears: passion. Everything you do every day has a huge impact on the lives of the societies you serve.
Well, thank you, Enrique. You and Telefónica have certainly had a huge impact on society, and I really appreciate you joining us today. Thank you so much.
Networks enable connectivity and access to the digital world. In these critical network workloads, integrated QuickAssist Technology lets companies increase their compression and cryptography capabilities, which in turn allows organizations to increase the amount of encryption they provide without sacrificing performance. The network core and RAN, the foundation for 5G, run on Xeon thanks to multiple generations of investment in both Xeon features and software enablement. Let's hear from Ericsson.
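As an illustration of the kind of data-path work QuickAssist offloads, compression followed by cryptographic protection, here is a minimal software-only sketch using only the Python standard library. This is not a QuickAssist API; real deployments route these stages through accelerated providers, which is not shown here. The function names are our own.

```python
import hashlib
import hmac
import secrets
import zlib

def compress_then_protect(payload: bytes, key: bytes) -> bytes:
    """Compress a payload, then attach an HMAC integrity tag -- the two
    CPU-heavy stages that QuickAssist-style accelerators offload."""
    compressed = zlib.compress(payload, level=6)
    tag = hmac.new(key, compressed, hashlib.sha256).digest()
    return tag + compressed

def verify_and_decompress(blob: bytes, key: bytes) -> bytes:
    """Check the tag in constant time, then decompress."""
    tag, compressed = blob[:32], blob[32:]
    expected = hmac.new(key, compressed, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed")
    return zlib.decompress(compressed)

key = secrets.token_bytes(32)
message = b"network telemetry record " * 100
blob = compress_then_protect(message, key)
assert verify_and_decompress(blob, key) == message
print(f"{len(message)} bytes -> {len(blob)} bytes on the wire")
```

The point of hardware offload is that both stages run at line rate without consuming the cores that serve the workload itself.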
The 4th Gen Intel Xeon Scalable processor represents a big step forward in the industry's transition to the cloud-based paradigm. Its optimized performance and power efficiency, combined with the scalability and flexibility of Ericsson Cloud RAN, helps enable our high-capacity solutions for the world's most demanding RAN environments. We're looking forward to the launch of the 4th Gen Xeon processor with Intel vRAN Boost and see it deployed within our customers' networks.
We're not stopping at the network. We're also bringing this innovative new architecture to our workstation family of products. Launching in February, the Intel Xeon W processor families will power the next generation of high-performance workstations. Okay, let's talk about cloud. Cloud architecture is at the heart of the digital transformation, and we've seen that first wave of digital services change the way we interact with the world. Our customers are driving that next wave of innovation, and they're delivering it on top of a cloud-based microservices architecture.
At CoreWeave, we help our clients solve some of the world's largest, most difficult challenges in the high-performance compute space. We've chosen the 4th generation Intel Xeon Scalable processor to support our next generation of supercomputer instances in the cloud. This will further empower our clients to push the boundaries of AI, machine learning, and scientific discovery. Our specialized infrastructure is purpose-built for large-scale AI and machine learning workloads. The single-threaded performance of Sapphire Rapids, paired with the introduction of the AMX accelerator, will be critical to delivering best-in-class training and inference solutions.
IONOS is proud to introduce the fourth generation Intel Xeon Scalable processor into the IONOS bare metal cloud portfolio, which provides solutions to customers that want raw IT performance. The whole Intel family is great to work with, from R&D discussions and supply chain support to joint customer activities.
phoenixNAP will deliver fourth gen Intel Xeon Scalable processors in easily consumable APN. Our bare metal cloud platform will enable organizations to automatically provision servers, powered by this new technology, and quickly showcase results. It will also provide easy access to instances with built-in AMX acceleration, solving the emerging challenges in AI and ML processing by speeding up data movement and compression. Together with Intel, we'll be offering better performance for real-life workloads while ensuring advanced security and flexibility.
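AMX, mentioned by both CoreWeave and phoenixNAP above, accelerates tiled matrix multiplication on low-precision types (e.g. int8 inputs accumulated into int32). The following NumPy sketch illustrates only the numeric pattern the hardware performs; it is not an AMX programming interface, and the tile dimensions are chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# AMX works on small tiles of low-precision data, accumulating into wider
# registers. Model that numerically: int8 operands, int32 accumulation.
A = rng.integers(-128, 128, size=(16, 64), dtype=np.int8)
B = rng.integers(-128, 128, size=(64, 16), dtype=np.int8)

# Widen before multiplying so the products cannot overflow int8.
C = A.astype(np.int32) @ B.astype(np.int32)

# Worst case |product sum| is 127*128*64, comfortably inside int32.
assert np.array_equal(C, A.astype(np.int64) @ B.astype(np.int64))
print(C.shape, C.dtype)
```

Keeping operands at 8 bits quarters the memory traffic relative to float32 while the wide accumulator preserves exact sums, which is why this layout dominates quantized inference.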
At Dropbox, we want to address cost per unit of performance while increasing rack-level density. Thanks to our key partnership with Intel, we had early access to Intel's fourth generation Xeon, Sapphire Rapids, and are very excited about it. Sapphire Rapids introduces a brand new microarchitecture for the data center, scaling up core count and enabling new performance capabilities that in turn offer an even better experience to Dropbox's customers and help enable storage efficiencies.
OVHcloud has been collaborating with Intel for many years to develop a vertically integrated model that favors sustainability in the data center while offering a trusted cloud. We are excited for our customers and partners to experience 4th gen Intel Xeon Scalable processors in future solutions for our cloud instances, as well as our bare metal and storage offerings. With improved workload performance, thanks in part to built-in accelerators, Intel's latest technology will help OVHcloud answer our clients' needs for optimizing their investments, bringing continuity and scalability, and improving TCO.
Together, we are collectively delivering performance on a global scale. From AI workloads enabling smart infrastructure for electric vehicles, to high-performance computing advancing human health, to 5G core workloads, Xeon is ready for all of it. I'm so excited for the future of technology and what we can deliver together. Speaking of excitement about the future of technology, I would like to welcome back to the stage someone who shares in that, our CEO, Pat Gelsinger.
The past years have demonstrated how technology is increasingly central to every aspect of human existence. Powered by silicon, everything is becoming digital, and it truly is this magic of technology. We as a company, we're committed to continue to push forward that innovation, that discovery and growth. I've called it the five technology superpowers. Everything's becoming a computer. Everything and everyone is becoming connected. With cloud and edge, we have ubiquitous infrastructure. AI is turning data into actionable insights. Sensing. We see, hear, know where we are in every aspect of our digital experience. These five superpowers individually are powerful, but they're accelerating, augmenting, and driving an increasing pace of innovation as they come together and amplify each other. As Sandra mentioned, 100 million Xeons. Wow. Right? You know, what an installed base, what an extraordinary platform.
Working alongside our customers and partners, we see that this 4th generation Xeon is building on that extraordinary foundation. Together with Max, it will enable a next generation of differentiated solutions and systems at scale. This lets us tackle today's toughest computing problems while simultaneously building a foundation for tomorrow. You know, I often wonder to myself, why did God have me go be part of a software company? What was it that led me on this journey? The critical learning I gained there is just how important software is, right? We build hardware to run software and applications, to fuel the innovation of the software community.
As our CTO, Greg Lavender, says, "Software is the soul of the machine." Well, let's hear about the soul we're enabling with our ISV partners. I started off today with our OEMs and our cloud partners and how they're leveraging this innovation. Fundamentally, it's about how our silicon enables the software community to innovate and build on what we're doing. With that, let's hear a few remarks from the software community that, in my eight-year vacation from Intel, I came to know and love. Let's hear from them now.
Cloudera and Intel's joint solutions enable our customers to expand their hybrid cloud analytics, implement open data lakehouse with record-breaking performance, and roll out machine learning use cases across their organizations at a record pace. As an example, the Cloudera Data Platform running on Intel's hardware delivers higher performance for workloads like streaming analytics and IoT. We've also decreased the response times that are needed for machine learning algorithms used in fraud detection across our financial services customers.
With the launch of Intel's 4th generation Xeon processor, Red Hat now enables Intel QuickAssist Technology. This helps our communications service customers to accelerate offload, minimize latency, and seamlessly switch between public and private 5G networks. In addition, Red Hat will enable Intel Advanced Matrix Extensions, allowing our financial services customers to accelerate AI workloads such as fraud detection. This reduces costs and heightens security, providing peace of mind for everyday users like you and me.
SAP and Intel share a rich history of technology partnership going back more than 50 years. Intel is the undisputed leader in compute for SAP workloads across all deployment models, and we are both deeply committed to delivering unmatched value to customers.
I'm so pleased to have this opportunity to join you for the launch of Intel's 4th generation Xeon processors because it is truly a game-changing next gen platform. It also reinforces the value of the VMware Intel relationship as together, we continue to serve the needs of our broad range of customers. Multicloud environments are becoming richer and more diverse, providing many choices in the infrastructure stack for their workloads. The combination of your 4th gen Xeon processors and VMware vSphere provides a proven path for data center modernization across all industries.
Like you, I'm thrilled about today's launch. I'm excited to see all the innovation this technology is enabling, as you've heard from a plethora of companies, industry partners, and customers. Sapphire Rapids is just the next step on that journey. The Xeon roadmap is making great strides, hitting its key milestones and rebuilding the execution confidence our customers can have in Intel's foundational technologies. We remain on track with our process technology: the bold vision we set of five nodes in four years is on track, because since our founding, Intel has fundamentally believed in this pursuit of semiconductor innovation, described by Moore's Law, and we will continue to innovate until Moore's Law is exhausted, right?
You know, everything in the periodic table's been used up in that pursuit because we believe deeply in the power of technology. Today, we've provided a technological update for the backbone, the foundation that Xeon provides, and how critical it is as this foundation for human innovation. We are still, as a company, right in the thick of it. The world builds on Intel technology. As we wrap up, let me bring Sandra and Lisa back to the stage. We are fully focused on enabling ongoing digital transformation.
Delivering high-quality, innovative products to the market at a predictable pace.
To enable your success. Thank you.
Thank you.