Good morning. Good morning. How are you guys doing? It's so exciting to be here in San Francisco with so many press, analysts, and friends and partners in the audience. Welcome to everyone who's joining us online around the world. Today is an exciting day. We have a whole bunch of news to share with you today. Let's go ahead and get started. Now, you guys know us well. At AMD, we're all about pushing the envelope in high performance and adaptive computing to create solutions to the world's most important challenges. Whether you're talking about the cloud or enterprise or 5G or networking, AI, automotive, PCs, and much, much more, AMD technology is truly everywhere, touching the lives of billions of people every day. Now, this is an incredible time for AMD. We have so much innovation going on.
We're actually in the middle of introducing four new architectures and dozens of new products as we deliver an incredible pace of products and innovation. We started in September with our Ryzen 7000 desktop processors, which were our first CPUs powered by the Zen 4 core. Then just last week, some of you were actually in the audience, we launched the new RDNA 3 gaming GPU architecture with our Radeon RX 7900 series. We also have our Zen 4c core, which is optimized for cloud-native computing, and our next gen AMD XDNA architecture that Victor and his team are developing. They're both looking great and on track to launch in 2023. Today, we're here to talk about our 5 nm Genoa server processors. I've got to tell you, I'm incredibly excited with what we're gonna show you.
Now, you know we said the data center represents our largest growth opportunity and the number one strategic priority for our company. When you think about the modern data center, you know, customers need the highest performance compute engines actually across the board. In addition to our leadership EPYC processors, we also offer a very full portfolio, including our Instinct GPU accelerators built for HPC and AI, our leadership FPGAs and adaptive SoCs through our acquisition of Xilinx, and our leadership DPUs from our acquisition of Pensando. If you look at all of this, it's really the broadest data center portfolio in the industry. As I said earlier, today is all about EPYC. We're laser-focused on building the world's best data center CPUs. We launched our 1st Gen EPYC in 2017, featuring our original Zen core.
Over the last five years, we've consistently delivered three generations of EPYC. Today, 3rd Gen EPYC is the highest performance and most efficient server CPU in the world. We're extremely proud of that. When you look, you know, in the cloud, EPYC has actually become the industry standard based on our performance, our compute density, and our total cost of ownership. Every major cloud provider has deployed EPYC in their internal workloads and in their external customer-facing workloads. If you look today, there are nearly 600 public EPYC instances available worldwide. Now, if you look at high performance computing, we're also incredibly proud of the important scientific research that's conducted every day with EPYC.
Today, five of the top 10 most powerful supercomputers and eight of the top 10 most efficient supercomputers are using EPYC, including Frontier, which is the world's first exascale supercomputer, and it's both the fastest and the most efficient in the world. Now, in the enterprise, we also see very strong adoption of EPYC. This year, we increased our on-prem enterprise deployments by more than 50%, and we're significantly growing the number of EPYC-based solutions available across the ecosystem, and we're on track to double the number of solutions in 2023. Now, when CIOs are looking to modernize their data center, they are actually considering a number of different factors. First and foremost, compute is actually used to transform and drive the core business of the organization. All of us are doing the same.
When you're selecting compute, you actually wanna make the right selection, and it makes a big, big difference in terms of the OpEx and the CapEx budgets and what you can accomplish within a certain envelope. Of course, we're also thinking about sustainability and security as key things to consider. This is why choosing the right data center CPU is more important than ever. We thought about all of these factors when we developed 4th Gen EPYC. The key focus was to make the world's best server CPU roadmap even better. I'm very proud to say that 4th Gen EPYC delivers leadership on every single dimension. It's the highest performance, it's the most efficient, and we're delivering significantly better performance per watt than our competition.
What that means for enterprises and for cloud data centers, it translates into lower CapEx, lower OpEx, and lower total cost of ownership, including all of the performance that we talked about. Now, let me tell you just a little bit more about 4th Gen EPYC. We're using our leadership 5 nm technology, and we have up to 96 5 nm Zen 4 cores. We're also using our next generation chiplet technology. We're using a combination of 5 nm and 6 nm technology. We've also added the latest I/O. That includes PCIe Gen 5 and 12 channels of DDR5 memory. We support CXL memory expansion, and we've also doubled the number of confidential VMs. Drew, please. I am extremely excited to show you today for the first time 4th Gen EPYC. I love it too.
There's incredible technology here. What you can see are 12 compute dies. These are the small dies in 5 nm technology. That's where we put all the computing, all the CPUs and all the cores. Then you see the I/O die in the center, which is built in 6 nm technology. What you have on this package is a total of 90 billion transistors. That's a lot of transistors. Now let's look at what we can do with all that performance. Starting first with the cloud. When you think about cloud applications, integer performance is key. Looking at SPECint_rate, the industry-standard integer benchmark, we would say today that dual-socket 3rd Gen EPYC is 40% faster than the top of the competition's stack.
With 4th Gen EPYC, we actually extend that lead, and we now deliver nearly 3x better performance than the competition. What that means is the core density advantage of 4th Gen EPYC allows our cloud service providers to support more than double the number of instances per server. What that means for our end customers is that each of them gets to experience these instances with much higher performance. We're very proud of the deep partnerships across all of the cloud service providers. Let's hear next from one of our closest partners to tell you more about our partnership with Microsoft Azure. With that, Scott Guthrie, Executive Vice President, Cloud + AI.
Thank you, Lisa. AMD and Microsoft share a commitment to helping customers accelerate their digital transformation. No area better exemplifies the impact of our partnership than high performance computing, a critical capability in every industry. Startups like Meteomatics are leveraging HB-series virtual machines powered by EPYC processors with 3D V-Cache for mission critical simulations to help their customers manage risk from extreme weather. Microsoft's own silicon design teams leverage these VMs to get an 80% speed up for their EDA backend workloads compared to their prior solution. This enables us to bring the next generation of hardware technology on a more sustainable basis. In these times of historic change and uncertainty, we recognize the cloud plays a key role in helping our customers do more with less and overcome the dynamic challenges they face.
Today we're announcing two new HPC virtual machines that will be powered by 4th Gen EPYC processors. Each delivers our highest levels of performance, power efficiency, and cost effectiveness yet. First, we're introducing the all-new HX series. Featuring 1.5 TB of ultra-low latency memory, these VMs are purpose-built to help silicon design customers save money on EDA operations and deliver products to market faster. We're also introducing 4th-gen Azure HB-series virtual machines that deliver up to 2.5x the performance of the 3rd-gen HB-series and up to 6x the performance of HPC servers that customers run in their on-premises environments. Both HX and 4th-gen HB-series feature 400 Gb InfiniBand, our next generation networking, to bring supercomputing scale to every Azure customer.
Customers and partners can sign up today to access HX and 4th Generation HB series VMs in preview with Genoa processors that will automatically be upgraded to Genoa-X when they go into general availability in the first half of 2023. This is just the start as we look forward to launching additional Azure VMs, including confidential computing capabilities on both virtual machines and containers on 4th Gen EPYC processors in the future. Congratulations to the AMD team and thank you for your partnership and support of the Microsoft Cloud.
Thank you, Scott. We're so excited to see the new HX-series and HB-series VMs powered by 4th Gen EPYC in public preview today, and we're really looking forward to the next-gen Azure confidential computing VMs and containers coming soon. Now, for cloud data centers, efficiency has become equally important as pure performance. This is an area where our EPYC architecture really shines. If you look at SPECint_rate energy base, which measures the power efficiency of integer performance, systems with a higher score deliver more performance per watt. You can see our 4th Gen EPYC delivers 2.6x the energy efficiency of the competition. What this means is that choosing AMD EPYC CPUs in the cloud translates into lower energy usage and lower energy costs.
A number of our cloud providers are actually telling us that power is becoming a significant limiter in what they can install in their data centers. We believe that this is really differentiating for us. Enterprises around the world are going through massive business transformation, and that's also leading to tremendous growth in the cloud. Oracle is a leader in this area, and with their leading database services optimized for enterprises and running on Oracle Cloud Infrastructure, we're extremely excited about the partnership with Oracle. Today, we're very honored to have Clay Magouyrk, Executive Vice President at OCI, join us. Clay? It's great to have you here with us, Clay, and thank you so much for the partnership between AMD and Oracle. You know, you've been supporting us from the very beginning. Tell us a little bit about our partnership.
Well, first, thanks for inviting me. It's always good to be here and see all the happy people. You know, we've had a long road together. It feels like it was just yesterday when we launched Naples, and I realize now I'm five or six years older. We'll just go with five. But the pace of innovation that I think you and the team have been able to achieve is really incredible, right? From Naples and Rome, Milan, now through to Genoa. This many platforms in a short period of time has been incredible. Customers really get the benefit, right?
What's also really nice about working at Oracle is that obviously, as someone who works in the cloud, we have a big cloud business, and that's growing. We're also always very focused on how we deliver the highest performing systems. Our Autonomous Database runs on the most recent version of our Exadata systems, which also run on EPYC processors, because that gives us the best and highest performance database system out there. What we've been doing at Oracle, and what we see our customers doing, is choosing the right processor for the job, and many times that's EPYC, and the results speak for themselves.
Well, we are so honored to be in your data center, and it's, you know, wonderful to see all the momentum that we've built together. Now, you know, today is about 4th Gen EPYC, Genoa, so tell us a little bit about what you're doing with Genoa.
Well, there's a lot going on, right? I think there's the standard stuff. You know, we're launching our general-purpose Genoa-based compute, which offers great price performance across the board. We also have a very high-performance computing optimized system that we're launching, which offers, you know, RDMA networking capability and a bare metal offering for the highest peak performance available. We're launching that on the Genoa platform as well. There's more than that, right? I'm very excited personally about the work that we're doing with yourselves and Samsung across CXL. Genoa is obviously an important CPU, but what it brings with it is things like DDR5.
It brings with it CXL, which I think is gonna be very impactful in the data center, especially as a cloud provider. We can make use of that really cool technology to, I think, get better performance and lower costs. The same thing around confidential computing. You know, the secure extensions that you and your team have implemented are critical for us to be able to offer best in class confidential computing options, and that's what we're doing with Genoa as well. Last but not least, as you mentioned, power is extremely critical, right? Energy rates are going up. Everyone's concerned about the impact that we have on the environment.
You know, at OCI, we have more than 10 regions globally that are already 100% renewable energy, and we have a commitment to make all of our regions 100% renewable energy. What I love about the work that you've been doing with the team on Genoa is performance is better, power usage is down. It's amazing how you get both of these things together. I think, as primarily a software person, I'm always thankful for the amazing advancements of the hardware industry that we can then waste on more programming languages.
Well, you've made my engineers very happy hearing that. Now, Clay, you know, we're all about technology, but I know that Oracle and AMD are also all about customers. Can you tell us a little bit about how, you know, customers are, you know, seeing the technology?
Yeah. Well, you know, if I go back to where we were five or six years ago, a lot of the conversations I was having with customers were kind of around why AMD, and when would I choose AMD? I think that conversation has really shifted to where now the conversation starts with why not AMD? A prime example of that is the work that we've been doing with Red Bull Powertrains around the design of their new F1 engine. They're building their own power plant for the new set of regulations that comes out in a few years. They're doing that work on top of OCI, and they're doing it using Genoa's new high performance computing option. We're seeing that across the board, right?
Whether it's peak performance workloads or general purpose workloads, people are choosing, you know, AMD in general. Everyone's been very excited about Genoa specifically.
Fantastic. Well, thank you so much, Clay. We so appreciate the partnership, and we're looking forward to doing great things together in the future as well.
Yeah. Thanks for having me, Lisa. I really appreciate it. Thank you.
Thank you. Now let me talk a little bit about high performance computing. High performance computing is, you know, very, very important for complex modeling and scientific research. Here floating-point performance is really critical. When you look at 3rd Gen EPYC, we're already the world's fastest CPU in supercomputing. When you go to 4th Gen EPYC, that extends our leadership to deliver 2.5x more performance than the competition. What this means is that scientists can build more accurate models that can help solve the world's biggest scientific challenges, whether you're talking about climate change or renewable energy development or any type of medical research. All of that is helped by more computing capability.
If you look at the enterprise, there is also a diverse set of workloads there, whether you're talking about databases or big data analytics or virtualization or hyper-converged infrastructure. Java is one of the most prevalent environments in the enterprise. If you take a look at Java performance running in a typical e-commerce environment, again, 3rd Gen EPYC already delivers 50% more performance than the competition. We see an even larger performance lead with 4th Gen EPYC, going to 2.9x more performance than the competition. What this allows enterprises to do is process data faster, speed up analytics, speed up decision-making, and accelerate transaction processing. When you see all of that, there's so much advantage on the performance side.
As I said, just like our cloud partners, our enterprise partners are also looking for energy efficiency, and that's become much, much more important. Let me take you through an example of the energy efficiency discussion. A typical enterprise may have 15 Xeon servers in a rack. Last year, the average energy cost was $0.08 per kWh. To power that rack, it might have cost, you know, $8,400. Now, if you use 3rd Gen EPYC, you could do that same work in 10 servers. That's less CapEx, but dramatically less power consumption as well. The energy cost in that case would have been 32% lower than our competition. We all know that energy prices around the world have gone up.
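To make that rack-level arithmetic concrete, here's a minimal sketch of the calculation. The ~800 W per-server draw is an assumed figure, chosen so the first scenario lands near the $8,400 cited; the rough $50,000 quoted for the 5x-rate scenario would imply a somewhat higher per-server draw.

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours of continuous operation

def annual_energy_cost(servers, watts_per_server, dollars_per_kwh):
    """Annual electricity cost for a group of servers running 24/7."""
    kwh = servers * (watts_per_server / 1000) * HOURS_PER_YEAR
    return kwh * dollars_per_kwh

# 15 servers at an assumed ~800 W each, last year's $0.08/kWh rate
print(round(annual_energy_cost(15, 800, 0.08)))  # ~ $8,400, matching the example
# The same 15 servers after a 5x rate increase ($0.40/kWh)
print(round(annual_energy_cost(15, 800, 0.40)))  # ~ $42,000 under these assumptions
```

Scaling the server count down (15 to 10 to 5) while holding the work constant is what drives the OpEx savings described here.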
In certain parts of the world, energy costs have increased more than 5x in the last year. Those same 15 servers now cost maybe $50,000 a year to power. With 4th Gen EPYC, an enterprise can achieve that same performance in only five servers. That's dramatically less CapEx, but that's dramatically less OpEx as well, and that leads to a 54% savings in energy costs. If you scale this example to an enterprise running thousands of servers, you're now seeing that you can save millions of dollars by upgrading to 4th Gen EPYC. Another partner who has been with EPYC since the very beginning is HPE. To tell you a little bit more about how 4th Gen EPYC fits into the HPE portfolio, let's welcome HPE President and CEO Antonio Neri.
Hello, Lisa. I am pleased to be able to participate in this important industry milestone. The AMD theme, "together we advance_data centers," could not be more fitting, as HPE and AMD have been strong partners innovating together for more than 18 years. Just this year, HPE and AMD ushered in the exascale era with Oak Ridge National Laboratory and the debut of Frontier, the world's first supercomputer to break the exascale performance barrier while also being the greenest supercomputer on the planet. I want to thank Lisa and the AMD team for a tremendous partnership. With your launch today, I'm excited to share that HPE is adding six new platforms featuring Genoa. As a market leader in hybrid cloud, we are committed to delivering greater choice and simplicity through the HPE GreenLake cloud platform, which supports most of HPE's innovation using the 4th Gen AMD EPYC processor.
Customers can easily adopt cloud services with advanced performance while gaining critical capabilities in governance, security, and visibility. Our new products include new HPE ProLiant servers that offer enterprise customers a secure and simplified cloud management experience with workload-optimized performance to target a range of applications across edge, AI, analytics, and cloud native. We're also supporting the 4th Gen AMD EPYC processor in our recently expanded supercomputing and AI portfolio to help broaden adoption. Our new HPE Cray EX and XD supercomputers deliver some of the same technologies found in the Frontier supercomputer, but in a smaller footprint, to advance product design and accelerate innovation for organizations of all sizes. Congratulations, Lisa, and to the AMD and HPE teams who are making this possible. Together, we can't wait to help our customers accelerate what is next.
I wanna thank Antonio for that great partnership with EPYC, and we are extremely excited with the broad deployment of 4th Gen EPYC across HPE's entire portfolio, including six new ProLiant, GreenLake, and the Cray EX and XD platforms. As you can tell, we're incredibly excited about EPYC and all of the work that we've been doing together with our partners. Now let me turn it over to Mark Papermaster to tell you a little bit more about the technology and the architecture. Mark?
Thank you. It's such a proud day for us at AMD to share with you the technology that the team has come together to deliver for our customers. When we launched the design of our Zen CPU, we set a goal to return AMD to high performance and to sustained high performance CPU delivery. Today, I'm really excited to share with you how the Zen 4 CPU architecture is delivering for 4th Gen EPYC servers. You know, AMD really disrupted the industry when we debuted Zen in 2017. It was an entirely new CPU architecture for us, and it delivered significant IPC and energy efficiency gains. Then with 7 nm, Zen 2 brought incredible compute efficiency. With its ground-up new microarchitecture, Zen 3 delivered impressive high performance leadership.
We didn't slow down with the Zen 4 CPU in 5 nm and our I/O die in 6 nm, and I'll share with you some of the details shortly. We also added Zen 4c, a new density-optimized variant, and this addition to our core roadmap delivers the identical functionality of Zen 4 at half the core area. This will be coming out in the first half of 2023 with our server lineup. This core is workload optimized for applications that don't need to run at the highest frequency and can deliver even more computational efficiency. Also in 2023, there'll be a version of Zen 4 with a vertically stacked V-Cache for memory-hungry applications. It's full speed ahead for AMD on our CPU innovations. Now, a bit on Zen 4.
Like its predecessors, Zen 4 continues the AMD tradition of delivering each new CPU generation with performance and efficiency gains. AMD is a trusted supplier to our customers, and thanks to our on-schedule high-performance delivery, we deliver again with Zen 4. Let me walk you through the design and technology node improvements that we made in this Zen 4 release. There are several architectural and technology improvements compared to Zen 3. We added a new front-end design to better feed our execution pipeline. We also improved our branch prediction to further streamline the instruction pipeline flow. We improved our cache hierarchy with a fast private 1 MB L2, doubling its size from the previous generation, and a large shared L3 cache for overall higher performance. Also, you know, AMD has long supported the advanced vector extensions in x86 for machine learning and high computation workloads.
With Zen 4, we expanded our instruction support to include AVX-512. Our implementation uses a double-pumped approach on a 256-bit data path. What this is designed to do is prevent any frequency fluctuations while you're running AVX-512 workloads. It's really, you know, delivering all of the benefit of AVX-512. This capability is targeted at heavy lifting HPC applications like molecular simulation, ray tracing, physics simulations, and more. Let's take a closer look at the process technology underneath. With Zen 4, we deliver leadership efficiency with both design and process optimizations. We leverage TSMC 5 nm for the CPU die. It's the most advanced cutting-edge process technology, and it contributes, along with AMD design efficiencies, to bring significant performance and energy efficiency gains. Our decades-long relationship with our EDA partners and with TSMC continued with 5 nm.
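As a toy model of the double-pumped idea described above (a simplified sketch, not AMD's actual microarchitecture): a 512-bit vector operation is issued as two passes over a 256-bit datapath, so full-width AVX-512 work completes without needing a separate, down-clocked wide execution unit.

```python
def add_vec512_double_pumped(a, b):
    """Toy model of double-pumping: a 512-bit vector add (16 x 32-bit
    lanes) executed as two passes over a 256-bit (8-lane) datapath."""
    assert len(a) == len(b) == 16
    out = []
    for half in (slice(0, 8), slice(8, 16)):  # two 256-bit passes
        out.extend((x + y) & 0xFFFFFFFF for x, y in zip(a[half], b[half]))
    return out

# One 512-bit add, computed in two 8-lane passes
print(add_vec512_double_pumped(list(range(16)), [1] * 16))
```

The trade-off is throughput (two cycles instead of one per wide op) in exchange for keeping clock frequency steady across AVX-512 workloads.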
We operate in a very deep design technology co-optimization partnership that synergistically makes sure that the design and process changes work harmoniously together to deliver performance and efficiency gains that are also highly manufacturable. In Zen 4, the result is a specialized HPC process, and that's enabled us to get additional frequency beyond the baseline process technology. The teams co-optimized device scaling, device capacitance, the metal stack, and these all made significant contributions to Zen 4 in addition to the functional design levers that the AMD team pulled. All of this resulted in further computational efficiency. The logic and cache area scaling in 5 nm allowed us to further reduce the die area over Zen 3 by 18%, despite adding new performance features and new instruction support.
A 15-layer telescoping metal stack has been co-optimized to deliver both high frequency and high density routing capability. Now let's take a moment and look at the latest chiplet innovation in Zen 4. Customers can choose to deploy from two CPU core dies (CCDs) with 16 cores up to 12 CCDs with 96 cores per socket. That's modular scalability, and it's that modular capability that allows us to deliver tremendous configuration flexibility. Up to four 32 Gbps socket-to-socket Infinity Fabric links enable greater than 1.9x cross-socket bandwidth, a generational improvement over our 3rd Gen EPYC. The 4th Gen EPYC CPUs with Zen 4 also deliver leadership silicon and platform scalability with significantly higher performance per core, maintaining balanced performance for the most demanding workloads.
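A back-of-the-envelope sketch of that modular scaling; the 8-cores-per-CCD figure is inferred from the 16-core/2-CCD and 96-core/12-CCD endpoints mentioned above, not a stated spec.

```python
CORES_PER_CCD = 8  # assumed: each Zen 4 compute die (CCD) carries 8 cores

def genoa_core_count(ccds):
    """Core count for a socket populated with `ccds` compute dies."""
    if not 1 <= ccds <= 12:
        raise ValueError("a Genoa package carries at most 12 CCDs")
    return ccds * CORES_PER_CCD

# The range called out in the talk: 2 CCDs (16 cores) up to 12 CCDs (96 cores)
print(genoa_core_count(2), genoa_core_count(12))  # 16 96
```

This is the essence of the chiplet approach: one small compute die design is reused across the whole product stack, and SKUs differ mainly in how many dies are placed on the package.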
When AMD first pioneered the use of chiplets in 2015, it was clear that we could extend Moore's law, reduce costs, improve our energy efficiency, and still deliver high performance generationally to our desktop and server CPUs. We have more than 40 chiplet standard products in manufacturing today. We've been an absolute pioneer here, and the industry at large recognizes the benefit of chiplets and is moving in the direction that AMD has set. Chiplet designs are not just for CPUs. Last week, we debuted our first chiplet-based design for GPUs based on the RDNA 3 graphics architecture, delivering an improved experience, performance, and power efficiency for gamers. Moving on to the key metric for CPU performance. That is always how efficiently the processor can get the job done at each clock tick.
That's IPC, instructions per clock, and I'm so proud that in our server applications, Zen 4 brings a 14% instructions-per-clock improvement, based on a geometric mean of 33 server workloads. Zen 4 is a derivative of Zen 3, so it's targeted at enhancing the base architecture of its predecessor. In Zen 3, we increased the execution resources; with Zen 4, that meant we needed to work on feeding instructions into the machine faster. That's why you see most of the improvements coming from the front end and branch prediction, because that increases the number of instructions delivered per cycle. We grew the op cache, which increased the hit rate and delivers more ops per cycle.
Load/store improvements and the larger L2 provide critical data faster, and together these contribute to the overall 14% IPC improvement. When you look at the cumulative IPC improvement in the Zen era, it represents a 235% gain over four generations, over just the last five years. We have clearly not let up. Let me assure you that we have not used up all the tricks in the book for CPU performance, and we have a lot more innovations coming in future generations. Our focus on core performance is absolutely relentless. Of equal importance is our commitment to high performance with energy efficiency in every generation. When we announced the first generation of Zen, I showed you that we were advantaged in both area and power versus our competitor's x86 CPU.
We've grown that advantage. This is a choice that we made: a choice to optimize for both high performance and power efficiency. Rather than just throwing power and area at the problem, AMD strives to achieve a balance of high frequency design that's power efficient, with both our methodology and our technology deployment. The result is a leadership CPU with significantly better performance per watt in a much smaller area. Utilizing our leading-edge process technology and a world-class physical design approach, the AMD Zen 4 design is 40% more area efficient than Intel's current server processor. As we showed during the Zen 4 Ryzen 7000 series launch in August, Zen 4 is about 50% smaller than the Intel Alder Lake Golden Cove core for desktops.
Now, what we expect is that the Zen 4 area efficiency leadership will in fact grow even more versus the Intel server Golden Cove core once it's released. Zen 4 in EPYC sockets is up to 48% more energy efficient than the competition, and that is a key lever for CIOs who are looking to substantially improve their sustainability with their deployments. With the 4th Gen EPYC, you're reducing the environmental impact, all while improving your total cost of ownership. It's really a huge win-win for our customers. The impact of Zen 4 in the 4th Gen EPYC server CPUs is fully evident in the performance gains at the same TDP as shown on these graphs.
The leadership 5 nm technology is part of the reason that we can deliver such huge gains, but it's amplified by the IPC gains that are accomplished without compromising on power. In the SPECrate2017 integer benchmark, you see a 1.41x improvement generationally in the performance per watt of energy expended. In the floating point benchmark, you see a 1.7x improvement. As part of our AVX-512 implementation, we also added extensions for convolutional neural nets with the vector neural net instructions, VNNI. 4th Gen EPYC server CPUs with AVX-512 VNNI deliver a 2.67x uplift for BERT Large, our natural language processing benchmark, using a 64-core gen-over-gen comparison. It's just a huge speed up for CPU inferencing.
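To see why VNNI helps inference, here's a sketch of what a single 32-bit lane of the AVX-512 VNNI VPDPBUSD instruction computes; the real instruction does this across every lane of a vector register at once, and this simplified model ignores 32-bit overflow behavior.

```python
def vpdpbusd_lane(acc, u8x4, s8x4):
    """One 32-bit lane of VPDPBUSD: multiply four unsigned bytes by four
    signed bytes, sum the products, and add the result to the accumulator."""
    assert all(0 <= u <= 255 for u in u8x4), "first operand holds unsigned bytes"
    assert all(-128 <= s <= 127 for s in s8x4), "second operand holds signed bytes"
    return acc + sum(u * s for u, s in zip(u8x4, s8x4))

# Fusing multiply, widen, and accumulate into one instruction is what
# accelerates the int8 dot products at the heart of neural-net inference.
print(vpdpbusd_lane(0, [1, 2, 3, 4], [10, -10, 10, -10]))  # 10 - 20 + 30 - 40 = -20
```

Without VNNI, each of those multiply-widen-accumulate steps takes separate instructions, which is why quantized int8 models like BERT see such large gen-over-gen uplifts.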
Now, let's take a moment and hear from one of our key customers. We have a short video featuring Amin Vahdat from Google Cloud, who will elaborate on their use of the AMD EPYC CPUs in the Tau VM family for data security and to reduce the company's energy footprint. Let's roll the video.
Thank you, Mark. We are very excited to be a part of the launch of AMD's 4th Gen EPYC processors. At Google Cloud, we're committed to providing infrastructure choices that are optimized for real-world workloads so our customers can get the best experience in price performance. In 2021, we introduced 3rd Gen AMD EPYC processors in our Google Cloud Tau VM family. Tau VMs deliver a leading combination of price and performance, and we've been thrilled to see these VMs change what is possible for a variety of scale-out workloads for many of our customers. Since then, we have expanded the availability of AMD EPYC processors to our compute optimized VM family, C2D. Of course, we will be introducing VMs based on the new 4th Gen EPYC processors.
At Google, we are proud to have matched all of our energy consumption with renewable energy for five years in a row, making us the cleanest cloud in the industry. We have set the ambitious goal to operate all our data centers on carbon-free energy 24/7 by 2030. Our growing footprint of AMD EPYC processors is helping us with our efficiency goals, and we are excited by the energy consumption improvements of AMD's 4th Gen EPYC processors. Google Cloud is strongly committed to the protection of our customers' data, and encryption is a powerful mechanism to help us achieve this. For years, Google has provided encryption both in transit and at rest to protect our customers' data. To complete the full data protection life cycle, our confidential computing portfolio, based on AMD EPYC processors, also protects data in use.
Google Cloud security team and Google Project Zero work closely with AMD's product teams to advance our security capabilities with confidential computing. We value our close partnership with AMD and look forward to collaborating on new Google Cloud offerings to deliver increased performance, greater efficiency, and functionality for our users. Thank you.
Thank you, Amin. As you just heard, EPYC processors have been instrumental for Google in reaching their efficiency goals and in helping protect data in use, at rest, and in transit through confidential computing features. Those features are part of AMD Infinity Guard, and I'll cover the new Infinity Guard capabilities in just a few minutes. Let's take a look at I/O, memory, and then security improvements. 4th Gen EPYC delivers leadership I/O bandwidth, 2x more than our previous generation, moving from PCIe Gen 4 to PCIe Gen 5. Further, PCIe Gen 5 supports memory expansion capabilities with Compute Express Link, CXL. We have CXL 1.1+ support; the "plus" is because it also supports CXL 2.0 memory devices for disaggregation, enabling tremendous flexibility in customer configurations.
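The "2x" I/O bandwidth figure follows directly from the per-lane signaling rates: PCIe Gen 5 runs at 32 GT/s versus Gen 4's 16 GT/s, with the same 128b/130b line coding. A minimal back-of-envelope sketch (the helper function and the x16 lane count are illustrative, not AMD's numbers):

```python
# Back-of-envelope PCIe throughput: Gen 4 vs Gen 5 on an x16 link.
# The rates (16 and 32 GT/s) and 128b/130b encoding come from the PCIe
# specs; real links deliver somewhat less due to protocol overhead.
def usable_gbps(gt_per_s: float, lanes: int = 16) -> float:
    """Usable Gb/s for one direction of a PCIe link."""
    encoding_efficiency = 128 / 130   # 128b/130b line coding
    return gt_per_s * encoding_efficiency * lanes

gen4 = usable_gbps(16)   # ~252 Gb/s (~31.5 GB/s) per direction
gen5 = usable_gbps(32)   # ~504 Gb/s (~63 GB/s) per direction
print(round(gen5 / gen4, 1))   # 2.0 -- the generational doubling
```

The ratio is exactly 2 because the encoding is unchanged between the two generations; only the signaling rate doubles.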
DDR5 is also very, very key because it ensures we have the memory bandwidth to keep up with all the added cores in 4th Gen EPYC. With up to a 2.3x memory bandwidth improvement, we keep balanced compute that scales. We're also keeping pace on memory security, which relies on high-performance AES-XTS encryption and decryption. AES is the de facto cryptographic algorithm and an essential element for securing application data in memory. With 4th Gen EPYC, AMD delivers even stronger 256-bit AES-XTS encryption. Zen 4 also added hardware APIC virtualization to accelerate our VMs and confidential VMs, speeding their implementation. Okay. AMD Infinity Guard is our multifaceted approach to data center security, and it delivers an industry-leading set of modern security technologies that significantly decreases the potential attack surface of our processors.
With each generation of Zen, we've delivered more functionality and more leading-edge security than the generation before. With Zen 4, we've enabled new features, including a doubling of the confidential computing VM guests, so we can serve even more customers than before. We've also further hardened protection against side-channel attacks while running simultaneous multithreading in secure nested paging mode. AMD continues to deliver the Zen CPU roadmap on schedule and to our performance goals. We remain focused on being that trusted partner to our customers and on delivering our next generation of designs, which are well underway. We've doubled down on implementation efficiency, getting to a finer-grained level of power optimization than ever before. With 5 nm process technology and design optimization, AMD delivers compelling performance at low TDP.
It's incentivizing our customers to take advantage of our efficient, secure, cool, and quiet systems with leading-edge performance at a fraction of the competitor's power consumption. AMD thrives on innovation, and you can see that in each generation of the Zen roadmap. Zen 4 is simply the latest testament to AMD's execution excellence. Thank you. I'm very, very happy to bring Dan McNamara up to the stage. Dan.
Good morning. Hey. Thank you, Lisa. Good morning, everyone. Back in June, at our Financial Analyst Day, I talked about the fact that the EPYC business is on fire. Today is the next step in our journey. I have to admit right at the start that Genoa is actually the coolest product I've ever been associated with. You just heard a bit about the performance and architecture of 4th Gen EPYC. Now I'm gonna shift gears a little bit and talk about the tangible business benefits that enterprises can tap into by leveraging 4th Gen EPYC. IT organizations worldwide are grappling with the need to process massive amounts of data and accelerate business results, and to do both with a shrinking power, cost, and space envelope. We designed 4th Gen EPYC for those key needs.
Through countless conversations with our customers, we believe there are three main pillars that everyone is optimizing for as part of their modernization needs. The first is to harness all that data I just talked about and deliver real-time, actionable insights to the business. To do that, you need the highest bandwidth, lowest latency, and highest throughput systems in your fleet. Second, CapEx and OpEx budgets are under strain across the industry. I'll show you how, with 4th Gen EPYC, we're gonna enable customers to consolidate their infrastructure and drive savings. Lastly, every person and every enterprise is tasked with reducing energy usage and carbon footprint. I'll show you how, with 4th Gen EPYC, we're not only driving efficiency and sustainability in the data center, but driving a dramatic impact on the environment as well.
Let's take a look at the key ingredients within 4th Gen EPYC that drive this value. With EPYC, it all starts with core counts, and you already heard today that our top of stack is 96 cores. That's a full 32 cores more than our market-leading Milan processor and more than 2x our competitor's current top-of-stack processor. But it's not just about core counts; most enterprise workloads demand a much stronger core. We did a little bit of analysis here: we took our 4th Gen EPYC 32-core part and compared it against our competitor's 32-core Ice Lake. As you can see, we're delivering up to 55% higher per-core performance.
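The core-count arithmetic is easy to verify; the specific comparison points (the 64-core Milan top of stack and the competitor's 40-core Ice Lake 8380) are assumptions based on the parts being discussed:

```python
# Sanity check on the core-count claims; SKU choices are assumptions.
genoa_cores   = 96    # 4th Gen EPYC top of stack
milan_cores   = 64    # 3rd Gen EPYC (Milan) top of stack
icelake_cores = 40    # competitor top of stack (assumed Xeon 8380)

print(genoa_cores - milan_cores)      # 32 -- "a full 32 cores more"
print(genoa_cores / icelake_cores)    # 2.4 -- "more than 2x"
```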
This density and performance is what has our customers and partners, some of whom you've already heard from today, so excited to launch new platforms and services based on 4th Gen EPYC. This product stack is how we deliver all that performance. We designed it to optimize for different workloads and segments in the market, starting with cloud and high performance computing, where we're delivering a number of SKUs from 96 down to 48 cores, optimized for density and throughput. Second, in the core of the enterprise, we're delivering multiple SKUs with high per-core performance, in addition to our 96-core top of stack that delivers single-socket performance leadership, which is critically important as we go forward. Lastly, for the mainstream enterprise, we're delivering solutions from 32 down to 16 cores, optimized for power and cost.
Across the stack, you can expect the same memory, security, and I/O capabilities that Mark just talked about, regardless of your choice. It's a reduced SKU stack, but we're delivering much broader core-count breadth. I have a question: what happens when you combine the highest per-core performance with the highest density? You break a few records. We've broken records across a number of workloads, from database management to technical and engineering workloads, to infrastructure and business applications. I'm very excited to say that we've broken more than a few world records; we're at over 300 and counting today. Now let's take a look at what this all means to the enterprise. Enterprises rely on relational database management systems to help with decision-making and analytics.
In the enterprise today, your analytical capability is one of your key differentiators. In the picture on the left, we ran a test with our top-of-stack 4th Gen EPYC versus Intel's 8380. As you can see, we're delivering more than 2.5x the performance for business-critical queries. Transaction processing is critically important for the enterprise as well. Whether you're in e-commerce, financial services, or other segments, more transactions per second means more customer touches and more sales. Similarly, on the right, we ran a test with our top of stack versus the Intel 8380. As you can see, again, we're delivering more than 2x the performance in critical OLTP transactions.
These are just two examples of how an enterprise can leverage 4th Gen EPYC to move and process their data more than 2x faster and drive more business agility. Now, the second pillar I talked about was infrastructure consolidation, and I wanna look at that through the lens of enterprise virtualization, which is obviously a very hot topic today. VMware is the leader in enterprise virtualization software. We have a long-standing relationship with them, and in fact, it gets stronger with every new generation we deliver. To hear more about what VMware is doing with 4th Gen EPYC, let me introduce their CEO, Raghu Raghuram.
As we are all aware, enterprise customers today face a wide range of business challenges that drive them to modernize and transform their data centers. As they do so, they demand all of the benefits of the cloud for their on-prem environments. They also need solutions that are easy to implement so they can optimize their investments. At the intersection of this enterprise demand and VMware innovation lies the latest generation of our enterprise workload platform, VMware vSphere 8. We recently launched vSphere 8 at VMware Explore U.S., and it is now available for download. Today, VMware is proud to announce that vSphere 8 is available and optimized for the 4th Gen AMD EPYC processors. This combination of vSphere and EPYC allows enterprises to realize hyper-optimized performance with the lowest TCO and energy consumption, all in pursuit of data center modernization for our customers.
In addition to delivering the best virtualization performance, VMware and AMD continue our commitment to provide exceptional security solutions. When running on EPYC processors, VMware vSphere delivers true confidential computing for on-prem data centers, and it requires zero customer application work whether you use virtual machines or containers. The combination of AMD EPYC processors and VMware vSphere provides a proven path for data center modernization across industries. As an added bonus, we are contributing dramatically to lower power consumption in the data center. Together with AMD, we are redefining the green data center with increased machine consolidation and power reduction. Thank you for inviting me to share this important news with you and your audience.
Okay. Thank you, Raghu, and the entire VMware team. We're very, very excited about delivering the joint benefits of vSphere on 4th Gen EPYC to our customers. Now, as Raghu mentioned, virtualization performance is a key factor that CIOs look at as they modernize their infrastructure. VMmark is an industry-standard benchmark that measures virtualization performance on servers. As you can see here, in the VMmark test we've done, we're delivering almost 3x the performance versus our competitor, which is just a tremendous advantage. We're also delivering over 3x the density, which allows more VMs to be deployed. This performance and density is critical, no question.
We all know that the data centers of today are also challenged on space and power, and packing the most compute into the smallest space and power envelope is super critical as well. Taking these factors into account, we wanted to look at how 4th Gen EPYC can enable enterprise virtualization. We took an example of a roughly 2,000-VM deployment. As you can see here, it would take about 15 servers based on Intel Ice Lake processors to deliver that workload. On the right, you can see that we could deliver the same exact workload and density with five servers based on 4th Gen EPYC.
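The consolidation ratios behind this comparison are straightforward to sanity-check; the 2,000-VM workload and the 15-versus-5 server counts come from the talk, and everything else below is simple division:

```python
# 2,000-VM deployment: 15 Ice Lake servers vs 5 4th Gen EPYC servers.
vms = 2000
legacy_servers, epyc_servers = 15, 5

print(round(vms / legacy_servers))    # ~133 VMs per legacy server
print(vms / epyc_servers)             # 400.0 VMs per EPYC server
print(legacy_servers / epyc_servers)  # 3.0 -- one-third the server count
```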
Now think about this: the EPYC 9004 delivers the same exact output, but with one-third the servers, at 50% less power, and that combines to deliver a 40% CapEx and a 61% OpEx reduction annually. Tremendous savings. Here's the best part: it's never been easier to migrate your VMs from Intel to AMD, and Forrest will show you that in a little bit. Now, I'd like to take a step back, look at 4th Gen EPYC at a little larger scale, and take a bigger-picture look at server energy efficiency. We looked at this using an example of about one million servers deployed in on-prem data centers in the U.S. today. Imagine for a moment, and this is hard for me.
Imagine for a moment if all of those one million servers were populated with top-of-stack Ice Lake processors. Very hard, but bear with me. To generate that same output, it would take 318,000 servers based on 4th Gen EPYC. Now think of that: 680,000 fewer servers with 4th Gen EPYC. No question, huge savings, right? Acquisition costs, operating costs, just a tremendous win for the enterprise. It actually gets more interesting. Those same 680,000 fewer servers will save over 4 billion kWh of energy consumption. By the way, that's more than the annual energy consumption of over 400,000 U.S. households. Second, those same 680,000 fewer servers will reduce CO₂ emissions by 2.2 million tons. 2.2 million tons.
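These figures are internally consistent, as a quick back-of-envelope check shows; the implied 10,000 kWh per household is close to the published U.S. residential average, which is an outside fact used here only for context:

```python
# Check the per-server and per-household energy figures implied above.
saved_kwh   = 4_000_000_000   # quoted annual savings, kWh
servers_cut = 680_000         # fewer servers needed with 4th Gen EPYC
households  = 400_000         # quoted U.S.-household equivalent

print(round(saved_kwh / servers_cut))  # ~5882 kWh saved per removed server
print(saved_kwh / households)          # 10000.0 kWh per household implied
```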
Here's the other interesting part, to put that in context. It would take 2.4 million acres of U.S. forest to remove that much CO₂ from our atmosphere in a year, and 2.4 million acres happens to be roughly the size of Yellowstone National Park. We are talking about tremendous, game-changing results, not only for the data center, obviously, but for our environment as well. Now, let me close with a couple of last thoughts. First and foremost, it's never been more important to be maniacally focused on the efficiency of server fleets. No question. Second, it's never been easier for enterprises to adopt 4th Gen EPYC and deliver higher performance, consolidate their infrastructure for overall cost savings, use less energy, and take a huge leap toward their sustainability goals.
I wanna thank you for your time, and now let me introduce Forrest Norrod to talk about our ecosystem of partners and also where you can get your hands on one of these EPYC servers. Thank you.
Thank you, Dan, and good morning, everybody. Before we get into where you can lay your hands on those sweet 4th Gen EPYC servers, I wanna show you why there's never been a better time to embrace EPYC. Dan just showed you, quite candidly, how much of a risk it is for enterprises and server customers not to switch to EPYC. From performance to consolidation and energy savings, AMD is the clear choice. Now, when we look across data centers today, there is still an enormous deployment of older servers from the Skylake or Naples server CPU generations or before; that's well over five years old in many cases.
Moving off those aging servers, off that aging infrastructure, has significant benefits because, quite candidly, just one AMD EPYC server can replace over five legacy servers, and the results of that consolidation are just incredible: 58% or more savings in energy costs per year, and an 80% reduction in data center footprint while delivering more performance. That consolidation is easy to do. With AMD EPYC, you can move your workloads and applications very easily from those legacy servers onto new, modern infrastructure. In less than an hour, you can move 380 virtual machines from those five legacy servers to a single 4th Gen EPYC server using standard tools. The energy and space savings, the performance gains, and that simple migration, that easy transition, make AMD EPYC the clear choice for the modern data center.
Now I'd like to talk to you and introduce you to some of the partners that have those EPYC servers that you can embrace. First off, I'd like to introduce Arthur Lewis, the President of the Dell Global Infrastructure Solutions Group. Arthur, it's so great to have you here to be part of this launch event. Maybe you can start, though, with telling the audience a little bit about how Dell ISG delights your customers.
Forrest, thank you for having me, and congratulations on a great launch. Just talking a little bit strategically, data is the currency of today, and it's the currency of the future. Modern multi-cloud infrastructure is the manner by which customers will access and unlock the full value of their data. The breadth of the Dell Technologies portfolio and our deep understanding of the data landscape, given that we're the largest provider in the world of data storage systems, housing 70% of the world's mission-critical data, allows us the unique opportunity to work with customers from edge to cloud to digitally transform the way they do business by providing simple, secure, efficient, and intelligent access to their data. Access to data is only one component. Customers also need the ability to process that data efficiently, securely and sustainably.
With the launch of the next generation PowerEdge portfolio, our largest refresh in company history, we hit the mark, driving industry-leading innovation in the areas of artificial intelligence, security, and sustainability. AMD, as you've seen today, with extremely strong innovation in performance and design, remains a compelling portion of our portfolio, and I fully expect to see, and in fact, Forrest, already am seeing, strong customer demand based on AMD's 4th Gen EPYC.
That's fantastic, Arthur. Now, you know, this is not a new thing, though. You've been designing and deploying some great AMD-based solutions for the last three generations as well. I think, you know, you've learned a lot about customer needs and how best to tune those solutions for performance, security, efficiency. Can you tell us a little bit more about that?
Yeah. Forrest, we are incredibly excited, and I think customers are even more excited. We've designed incredible one-socket and two-socket solutions that support up to 50% more cores per processor, which yields both the highest absolute performance and the highest performance-per-watt efficiency that we've seen to date. Let me put some numbers behind that statement. Customers can expect to see a 121% performance improvement generation over generation. Customers can expect to see a 55% improvement in performance per watt. Customers can expect to see up to 25% savings in kWh expended as a result of that efficiency. Customers will also be able to see a 60% increase in storage capacity, allowing them to consolidate and shrink their data footprint.
That's fantastic. Those are great performance and capacity features. I understand, speaking of performance, you've got some world records to talk about as well.
Yeah. Forrest, you know, we've had the pleasure of working together and setting many world records over the lifetime of EPYC. There are four I'd like to call out today with respect to this launch. On the one-socket platform, we set the world record for SAP Sales and Distribution, surpassing all other submissions by more than 2x. On the two-socket platform, continuing with SAP Sales and Distribution, we set the world record by supporting 148,000 users. That's an impressive record because not only did we surpass all of the two-socket submissions, we also surpassed all of the four-socket submissions.
Staying on the two-socket platform, and largely enabled by the 192 Zen 4 cores of 4th Gen EPYC, we also set the world record for Java performance, and you heard Lisa talk about that earlier today. Then lastly, and I think most importantly, there's the TPCx-AI benchmark record we set. This is an end-to-end benchmark that covers training and inferencing, and we not only set the world record, we smashed the previous record by more than 3x.
Oh, man, that is great to hear. That is great to hear. You know, performance is certainly central to anyone considering a new server. But as well as we've talked about, you know, power efficiency is also critically important. Can you tell us a little bit about, you know, your innovations and particularly using AMD 4th Gen EPYC, the innovations in power efficiency?
Yeah. Forrest, it's no secret that sustainability is very top of mind for customers, and Dell obviously shares that priority. In addition to very significant advancements we've made in logistics and packaging, we have very strong circular economy and climate actions. We believe the latest iteration of our AMD-based servers presents one of the most compelling arguments for sustainable compute in the industry. Again, let's talk through some of the numbers as to why I feel comfortable saying that.
One is the consolidation story you guys have talked a lot about this morning. With the performance and efficiency increases we've seen, you can consolidate gen-over-gen the workloads that were on two servers onto one server, and when you're looking at generation-minus-one to the current generation, you can consolidate from five servers to one. That's an incredible efficiency that customers can realize. Customers will also be able to leverage Dell Smart Cooling technology to drive further efficiencies and manage their carbon footprint, excuse me, their CO₂ emissions. Customers will also be able to leverage OpenManage Enterprise Power Manager to track their performance versus output so they can track and optimize their carbon footprint. Better performance, better efficiencies, innovations in cooling, and the ability to track and optimize your carbon footprint.
It's about giving customers the ability to consolidate workloads and scale performance while driving significant efficiency savings. What's really cool about that for us is that customers now have the ability to do things that were historically at odds and are now coming into alignment: you can actually scale performance and drive energy efficiency at the same time.
It's fantastic. It's a fantastic combination. Now, of course, we're talking about data centers, so underpinning everything always has to be security. I think we've got some great secure technology, and you've really built quite a bit of security technology into PowerEdge as well. Tell the audience about that.
Yeah, absolutely. The PowerEdge portfolio is built on an industry-leading cyber-resilient architecture, with features like secure component verification, trip detection, system lockdown, multi-factor authentication, and a UEFI Secure Boot implementation that has been recognized by the United States National Security Agency as industry-leading. When you couple that with AMD Infinity Guard, customers can feel safe knowing that their data center is secure at its core.
That's foundational. Well, Arthur, thank you so much. We're so proud and pleased to be working with you.
Thank you for having me.
Dell's been a great partner for us, and we're so proud of the new PowerEdge portfolio based on 4th Gen EPYC and these important world records showing its relevance to the enterprise and AI. Now, another great partner, one that's been with us not just on the data center side but really across the entire breadth of the AMD portfolio, from client to data center, a long-term partner of ours, is Lenovo. It's with great pleasure that I introduce to the stage somebody I've known for many, many years: Kirk Skaugen, the President of Lenovo ISG. Kirk, it's great to see you. Thanks for coming to the event.
Yeah, I thought it was great. In fact, yesterday, I flew in, ironically, from Milan here to launch Genoa. That's pretty good.
That's amazing. Yeah, thanks for making that trip. I know you've been incredibly busy; it's been a very busy time for Lenovo, and you've had a lot of milestones recently. Why don't you tell the audience about that?
Yeah, it's been an amazing month for us. At Lenovo, we announced our 30th year of ThinkPad in the notebook space, but also 30 years in x86 servers, all the way back to September 21st, 1992, when we announced the first IBM PS/2 server that was on Micro Channel architecture.
Oh, man.
It came with a free mouse when you bought the server as a promotion. It's been great. We also had our earnings announcement, and we have tremendous momentum: we announced 33% growth in the server business, more than 100% growth in storage, and 400% growth in edge. This is a great time to join our momentum together.
Oh, that's absolutely fantastic. I mean, you've built an incredible portfolio now to keep that growth going with AMD EPYC. Why don't you tell us a little bit about that?
Yeah. Well, I mean, it really wasn't just about celebrating 30 years in the x86 server business. We announced our largest portfolio ever, 52 different products, more than 3.5x larger than anything we've announced before. Today I'm excited for Lenovo to be announcing 21 new ThinkSystem servers and ThinkAgile hyperconverged appliances, along with VMware. Great to see Raghu up there.
Yeah.
As well as Nutanix. It's our largest portfolio on AMD ever.
That's a great portfolio. A good-looking portfolio as well. Hey, I'm a server guy, I love these things, and it's not just aesthetics for the data center geeks. Why don't you tell the audience and our customers why they should choose Lenovo plus AMD for their data center products?
Sure. Our vision for the Lenovo data center business is to be the most trusted. You know, we were just ranked among the top 10 supply chains in the world, and one of the things we're trying to do is get these products out as quickly as possible through that trusted supply chain. I'm really proud that last year, for 97% of the products we delivered, we never rescheduled the order. We're excited to get these products out immediately, starting today, to our customer base. Between us, it's really been about deep collaboration and deep engineering. Going back to the IBM BladeCenter in 2004, when it was still the IBM business, before we acquired the x86 server business, we were the first to launch Opteron into the blade space.
We have an incredibly deep engineering relationship today. You look at where we're at relative to performance benchmarks and reliability: we're now number one in reliability in the world, and have been for eight consecutive years. Today, we're proud to hold 99 AMD world-record workloads, which is more than 2x any competitor in the world, and we're going to talk about some more today.
99, that's pretty compelling. You know, I know that whole value proposition is really resonating with some of the largest customers in the world, and I know you just came back from talking to a bunch of them. Can you tell us about some of the recent customer deployments?
I was just with some of our largest customers in Europe, and, you know, we're now adding a public customer reference every single day. We've added over 300 public references, if you go to lenovo.com, of people leveraging our technology together. Hetzner Online runs one of the largest data center fleets in Europe, hundreds of thousands of servers. We've been deploying this high-performance, energy-efficient computing for them with previous generations of AMD EPYC, and they're excited to be rolling out 4th Gen AMD EPYC. With the previous generation, they saw a 30% improvement in their TCO, and that's just the beginning.
That is absolutely fantastic. You know, part of that, of course, is driven by the power efficiencies; the power efficiency benefits of EPYC are really catching on and catching a lot of attention. And as Lisa mentioned, the performance is really one of the main highlights as well. Performance, of course, is nowhere more important than in high performance computing. We've got Supercomputing 22 coming up next week, and I know you're planning on spending the week there. For the HPC folks in the audience: you've got a large footprint in HPC, you've got a large footprint on the TOP 500 list, and I know you've got some strong, exciting announcements coming up.
Maybe you can preview some of what you're gonna be talking about next week?
Yeah. Hopefully, I'm not the only person who looks forward to Supercomputing-
You've got a few of us.
in the world. You know, we're very proud: one-third of the world's supercomputers run Lenovo, 162 of the TOP 500, across 18 different markets in the world. Of those 99 world records we have on EPYC, 62 are in high performance computing. I just checked with my team before I walked in here: over three days, we're gonna pack in 189 customer visits in Dallas. There's tremendous excitement about what's happening there, and it really has to do with this Neptune warm-water cooling that we're doing together. Today I'm really proud to talk about the new ThinkSystem SR665 server for sustainable computing. This is really taking the world by storm. We support 17 of the top 25 research institutions in the world, in addition to being number one on the TOP 500.
What Neptune addresses is the fact that fans alone can now consume up to 130 W...
Yeah.
...per server. With Neptune, we don't even require chillers in the data center. You bring warm water into your data center, and it circulates through the system: through the CPUs, the GPUs, the power supplies, the memory, the drives. We can take 98% of the heat off the server with up to 40% lower power consumption. Even at $0.15 per kWh, you can recover the cost in less than a year. We're seeing tremendous support across the industry. One customer I wanna talk about today is SURF, with their Snellius supercomputer, the largest supercomputer in the Netherlands. They support scientists at more than 100 scientific institutions and universities across the country. They're on the TOP 500 list, and they're very excited.
We're gonna be deploying 700 or more nodes on 4th Gen EPYC as a time-to-market supplier here. Very exciting, and tons of energy going into Supercomputing next week.
That's fantastic. The performance of supercomputers nowadays is absolutely amazing, and, more importantly, what people can do with that level of performance. I know that, you know, beyond HPC and beyond standard enterprise, you're also designing some systems for new use cases, and maybe you could tell the audience a little bit more about that.
Yeah. I mean, it's funny: when you go back 30 years, that first x86 server is now basically a Lenovo notebook, or even fits in a Motorola phone. Think about where we'll be 30 years from now; we can all make those predictions. One of the reasons we moved from the Data Center Group to the Infrastructure Solutions Group is that we recognize 75% of data is moving to the edge, from people-to-people, to people-and-machines, to machine-to-machine. We're gonna double the amount of data in the world in the next two to three years, which will be more data than has been created in the entire history of the world, if you think about that. And we're only computing on 2% of that data today.
We created a new ThinkEdge portfolio, and I think as we get even lower-power versions of EPYC, you're going to see us do more together on the telco edge for O-RAN, for NFV, and for Edge AI in the future. We're excited to be back up on stage talking about that in the near future.
That sounds good. We'll do that soon. Kirk, thank you so much for the partnership and really looking forward to these new systems.
Thank you, Forrest. Congratulations.
Thanks a lot. You know, it's been fantastic to work with Lenovo, and it's just incredible to see this full portfolio of ThinkSystem and ThinkAgile systems based on EPYC, and that Neptune warm-water-cooled system is very, very cool. Now let me continue with another key partner that's been with us on this whole EPYC journey over the last few years, and that other key partner, of course, is Supermicro. Now, Supermicro has long been known for offering customers finely targeted systems from a very wide range of platforms, and I'm happy to say that with 4th Gen EPYC, Supermicro continues this tradition. Once again, I'm pleased and honored to welcome Charles Liang, CEO, Chairman, and Founder of Supermicro. Charles, welcome. It's great to see you. Thank you for coming today. We really appreciate it.
Thank you.
You know, Supermicro, I must say, has seen incredible growth over the last few years. It's been truly remarkable to see. You know, what's been driving that exceptional growth?
Okay. As an engineering company based in Silicon Valley, we always try to deliver new technology to market as early as possible. Thank you for the strong relationship and the support. This time, again, with 4th Gen EPYC, we have 15 product lines all ready to ship today.
That's absolutely fantastic. I understand you've built a very wide range of systems, and you've recently expanded with rack-level systems as well. You're clearly making great traction with this.
Yeah.
Why do your customers, you know, love having such a wide range of choices?
Oh, yeah, very good question. Indeed, last year our revenue grew about 50%. Last quarter, our year-over-year growth rate was about 10x faster than the industry average growth rate. One of the key reasons why is our rack-scale plug-and-play solution. We install the CPU nodes, storage nodes, AI platforms, and switches, with management software and security features, all together for the customer. When customers receive our product, they just plug in two cables, a power cable and a data cable, and then they are ready to run their workloads. Customers really love that, especially during this supply chain challenge time frame.
Yes.
People just like that. We help them shrink their lead time from a couple of months to a couple of weeks. That's a very helpful solution for the market.
That's absolutely fantastic. I think, you know, you're now adding to that a rich set of new 4th generation Supermicro systems. Incredible to see. You know, tell us a little bit about these new 4th generation servers.
You know, starting from the beginning, 29 years ago when I founded the company, we designed our products based on a building-block solution. All the modules, all the subsystems we design are compatible and optimized across different product lines, even across different generations of products. It makes it much easier and much quicker for our engineers to design and deliver new technology to market with optimized quality and workload performance. That's why this time we have 15 different 4th Gen EPYC product lines all ready to ship.
That's absolutely fantastic.
Each of them is designed and specifically optimized for a different workload.
That's fantastic. Targeted systems in racks that you can plug two cables in and get all that performance. That's great.
Yes.
Now, Charles, another key part of Supermicro's heritage has been your long-term focus on green computing, on sustainability. Why don't you talk to us a little bit about that? 'Cause you guys have really been out in front on this topic.
Yeah, I love green. Since 2004, we started designing green computing products: making our circuits as power efficient as possible, making our thermal systems as efficient as possible, and making our systems and rack-scale products ready for customers to implement high-operating-temperature data centers as easily as possible, or to adopt liquid cooling, water cooling, as easily as possible. Today we are delivering, for example, a complete rack, all with liquid cooling.
Right.
Traditionally, when people talk about liquid cooling at rack scale or cluster scale, it always takes two to three months, or even four months, of lead time. Now we are able to make water-cooled rack-scale products available in two weeks for customers. That dramatically shrinks the lead time, and customers really love that.
Oh, that's fantastic. I understand you have a mission at Supermicro to drive environmental improvement.
Oh, that's great. You know, if the global IT industry selects energy-saving green solutions like the solutions we deliver today, or if the whole industry designs and implements their data centers and infrastructure as green as the solutions we provide, then together, globally, every year we can save up to $10 billion in electricity costs. That's equal to preserving about eight billion trees.
Eight billion trees.
We reduce CO₂, make our Earth more beautiful, and make our environment healthier for the generations to come. I'm really happy to focus on green computing, and thank you a lot for the AMD EPYC processors. They make our building-block solutions and our rack-scale products much easier to offer to the market.
That's fantastic. Well, that mission is pretty cool. As a Forrest myself, I'm all about saving trees. Now what's even greater is that these systems are available now, and you've got a couple of ways that customers can get ahold of them. Why don't you tell the audience how they can start seeing Supermicro 4th Gen AMD EPYC servers?
Yeah. We are ready to ship, again, across about 15 different product lines. Most of them are ready to ship today.
Fantastic.
Also, we offer samples. If customers want to run their workloads, they're welcome to order a sample. Or if customers want to save time and run their workloads immediately, within a few hours, they can easily work with our sales team and reach our remote, smart JumpStart program. With our JumpStart program, you can log in to a configuration at our headquarters, or in the Netherlands, or in Taipei, and you can run your workload right away on a system that's auto-configured and optimized with the configuration you want. You can see your workload.
Fantastic.
You can see what percentage of performance improvement you get, or how much time you can save. For example, with Genoa, I mean, the 4th Gen EPYC processor, although we see the system consume a little bit more power, maybe 20% to 30% more, it delivers more than 100% better performance.
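The efficiency arithmetic implied by those figures is worth making explicit. A minimal sketch, using the illustrative ratios quoted above (roughly 25% more power for 2x the performance) rather than any measured results:

```python
def perf_per_watt_gain(perf_ratio: float, power_ratio: float) -> float:
    """Relative performance-per-watt improvement of a new system vs. an old one.

    perf_ratio:  new performance / old performance (2.0 means "100% better")
    power_ratio: new power draw / old power draw (1.25 means "25% more power")
    """
    return perf_ratio / power_ratio


# Midpoint of the quoted 20%-30% extra power, with 2x the performance:
gain = perf_per_watt_gain(perf_ratio=2.0, power_ratio=1.25)
print(f"perf/watt improvement: {gain:.2f}x")  # prints "perf/watt improvement: 1.60x"
```

In other words, even with the higher absolute power draw, efficiency per watt improves by about 60% under these assumed numbers, which is why the extra power is a net win at rack scale.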
Agree. Yeah. Well, Charles, very impressive. Thank you so much for the long partnership with AMD and thank you so much for having such great products with us.
Thank you. That was my pleasure.
Thanks a lot.
Thank you.
Again, Supermicro is announcing today a full portfolio of 4th Gen EPYC servers. I'd like to thank all of our incredible platform partners that were here today, that have been participating in the launch virtually, and that have produced an incredible set of platforms for past generations of EPYC and for the next generation, Genoa. I'm very pleased and proud to say that across this list, the 4th Gen EPYC servers, many of which are available today, with further examples rolling out throughout the year, are gonna be available on premises or in the cloud for our customers to begin seizing the benefits that Dan, Mark, and Lisa spoke to.
Now, beyond the platforms, turning to our ecosystem of solutions: when we began this EPYC journey back with Naples, just a few miles away from here, we began with a small, close set of partners that were optimizing their software and hardware to work as part of the EPYC ecosystem. We've added to that ecosystem over the last five years, and I'm very pleased to say that when you look today at the names on the wall behind me, which are just a tiny fraction of our partners across the EPYC ecosystem, we have a complete ecosystem today.
No matter what workload you're running, from finite element analysis to transaction processing systems, whether you're an enterprise with one of the largest data centers in the world, or a small or medium business, EPYC and our partner ecosystem are ready to run your workload in an optimized fashion today. Now with that, I'd like to turn it back over to Lisa to close out the morning. Thank you very much.
All right. Thank you, Forrest. Hasn't it been a fantastic morning? I will say, as much as Forrest, Mark, Dan, and I love telling the EPYC story, what I love even more is our partners telling the EPYC story. I really wanna thank all of our partners who've joined us on stage today: Microsoft, Oracle, HPE, Google, VMware, Dell, Lenovo, and Supermicro. It's that close partnership that really brings these products to life and really enables us to deliver this incredible performance and efficiency. Let me now wrap things up. You know, I started at the beginning by saying our goal with EPYC was to build the best data center CPU roadmap in the industry, and I think we've done that. With 4th Gen EPYC, we have delivered another major step forward in performance and efficiency, making the best server processor roadmap even better.
We're really just getting started. When you think about the 4th Gen EPYC family, we've really broadened the portfolio, because the data center is becoming much more complicated. There are many more workloads, and optimized solutions can bring a lot to specific workloads. In the first half of 2023, we'll launch Bergamo, which increases core density. It will give us leadership scale-out performance and is particularly optimized for cloud-native workloads. As I said earlier, that product is looking fantastic in the labs right now, so we're looking forward to that. You heard Microsoft talk about Genoa-X, and what we've seen with our 3D V-Cache technology is that it adds tremendous capability for specific workloads, especially technical computing.
We're also ready to launch that in the first half of 2023. In the second half of the year, we'll launch Siena, which extends our EPYC portfolio into telco and the intelligent edge, and which is really focused on maximum performance per watt at the right cost point. You can see that we're taking the Zen 4 capability and really extending it through the entire set of workloads. In addition to our commitment to CPUs, we're also committed to being the partner of choice across the full spectrum of engines that you need in the data center. That includes our Instinct GPUs, our FPGAs, our adaptive SoCs, our SmartNICs, and our DPU products. You can really see that we have the full spectrum of what you need.
Again, I wanna say thank you so much for joining us today. It is really our honor and our pleasure to be able to share with you all of the exciting technology. It's an exciting day for AMD. I think it's an exciting day for the industry, and our goal is to bring the best to the data center together with our partners. Thank you so much.