We will now proceed to the AMD opening keynote speech. TAITRA Chairman James Huang will help us introduce our keynote speaker. Please, once again, welcome James to the stage.
Now, the moment you have been waiting for. As Chair and CEO of AMD, Dr. Lisa Su has led the company's transformation into a powerhouse of high-performance computing. Under her visionary leadership, AMD has achieved remarkable success. Dr. Su was recently named the 2024 Chief Executive of the Year by Chief Executive Magazine, recognized for her role in one of the most spectacular achievements in the technology sector. Dr. Su's influence extends beyond AMD. She has been a key advocate for the integration of AI across industries, emphasizing its transformative power. Her commitment to innovation and collaboration is evident in her leadership style, which focuses on the development of cutting-edge solutions while fostering an inclusive and forward-looking company culture. Now, on behalf of COMPUTEX, I'm very pleased to welcome our old friend, Lisa, but first we are going to share a video from AMD.
AMD does more than advance AI. AMD makes the limitless potential of AI possible, from AI PCs, to edge, to cloud. Powered by some of the most advanced GPUs, CPUs, and NPUs on the planet, and enabled by an open software approach that's accessible to all. Together with our partners, AI from AMD helps make more imagination possible, innovation, breakthroughs, and healing possible, peace of mind and thrills possible. The impossible is now possible.
Ladies and gentlemen, please join me in welcoming Dr. Lisa Su, Chair and CEO of AMD. Thank you, Lisa.
Thank you so much.
Thank you so much.
Thank you. Thank you. Good morning! Good morning. Thank you, James, for that very, very warm introduction, and welcome to everyone joining us today in Taipei and from around the world as we open COMPUTEX 2024. Every year, COMPUTEX is such an important event for our industry as we bring together all members of the ecosystem to share new products, to talk about new innovations, and really discuss the future of technology. But this year is even more special. With the rapid innovation around AI and all of the new technology everywhere, it is actually the biggest and the most important COMPUTEX ever, and I'm so honored to be here to open the show. Now, we have a lot of new products and news to share today, so let's just go ahead and get started.
Now, at AMD, we're all about pushing the envelope in high performance and adaptive computing to help solve the world's most important challenges. From cloud and enterprise data centers to 5G networks, to healthcare, industrial, automotive, PCs, gaming, and AI, AMD technology is everywhere, powering the lives of billions of people every day. AI is our number one priority, and we're at the beginning of an incredibly exciting time for the industry as AI transforms virtually every business, improves our quality of life, and reshapes every part of the computing market. AMD is uniquely positioned to power the end-to-end infrastructure that will define the AI computing era, from massive cloud servers and enterprise clusters to the next generation of AI-enabled intelligent embedded devices and PCs. Now, to deliver all of these leadership AI solutions, we're focused on three priorities.
First, it's delivering a broad portfolio of high-performance, energy-efficient compute engines for AI training and inference, including CPUs, GPUs, and NPUs. Second, it's about enabling an open, proven, and developer-friendly ecosystem that really ensures that all of the leading AI frameworks, libraries, and models are fully enabled on AMD hardware. And third, it's about partnership. It's really about co-innovating with our partners, including the largest cloud, OEM, software, and AI companies in the world, as we work together to bring the best AI solutions to the market. Now, today, we're gonna talk about a lot of new technologies and products, including our brand new Zen 5 core, which is the highest performance and most energy-efficient core we've ever built, and our next-generation XDNA 2 NPU core that enables leadership performance and capabilities for AI PCs.
We're also gonna be joined by a number of our partners as we launch our new Ryzen notebook and desktop processors, and preview our data center CPU and GPU portfolio for this exciting AI world. Let's go ahead and get started with gaming PCs. Now, at AMD, we love gaming. Hundreds of millions of gamers everywhere use AMD technology, from the latest Sony and Microsoft consoles, to the highest-end gaming PCs, to new handheld devices like the Steam Deck, Legion Go, and ROG Ally. Today, I'm excited to show you what's next for PC gaming with Ryzen. Our new Ryzen 9000 CPUs are the world's fastest consumer PC processors, bringing our new Zen 5 core to the AM5 platform, with support for the latest IO and memory technologies, including PCIe 5 and DDR5. I'm happy to show you now our brand-new Zen 5 core.
Zen 5 is actually the next big step in high-performance CPUs. It's a ground-up design that's extremely high performance and also incredibly energy efficient. You're gonna see Zen 5 everywhere, from supercomputers to data centers and PCs. And when you look at the technology behind this, we have so much new technology. We have a new parallel dual-pipeline front end, and what this does is it improves branch prediction accuracy and reduces latency. It also enables us to deliver much more performance for every clock cycle. We also designed Zen 5 with a wider CPU engine and instruction window to run more instructions in parallel for leadership compute throughput and efficiency. As a result, compared to Zen 4, we get double the instruction bandwidth, double the data bandwidth between the cache and floating-point unit, and double the AI performance with full AVX-512 throughput.
All of this comes together in the Ryzen 9000 series, and we're delivering an average of 16% higher IPC across a broad range of application benchmarks and games compared to Zen 4. So now let me show you the top-of-the-line Ryzen 9 9950X for the very first time. There you go. We have 16 Zen 5 cores, 32 threads, up to 5.7 GHz boost, and a large 80 MB cache at a 170-watt TDP. This is the fastest consumer CPU in the world. Oh, thank you. Okay, so let's take a look at some of the performance. So when you compare it to the competition, the 9950X delivers significantly more compute performance across a broad suite of content creation software.
In some of them, like Blender, that take advantage of AVX-512 instruction throughput, we're actually seeing performance up to 56% faster than the competition. In 1080p gaming, and we know all of our fans love gaming, the 9950X delivers best-in-class gaming performance across a wide range of popular games. Now, with desktops, we know that enthusiasts want a platform that lets you upgrade across multiple product generations, and with Ryzen, we've done just that. Our original Ryzen platform, Socket AM4, launched in 2016, and now, approaching its ninth year, we have 145 CPUs and APUs across 11 different product families in Socket AM4. We're actually still launching new products. We actually have a few Ryzen 5000 CPUs that are coming next month.
And we're taking this exact same strategy with Socket AM5, which we now plan on supporting through 2027 and beyond. So you're gonna see AM5 processors from us for many, many years to come. Now, in addition to the top-of-the-stack Ryzen 9950X, we're also announcing the 12-, 8-, and 6-core versions that will bring the leadership performance of Zen 5 to mainstream price points, and all of these go on sale in July. So now let's shift gears from desktops to laptops, and there's gonna be a lot of discussion about laptops at Computex this year. AMD has been actually leading the transition to AI PCs since we introduced our first generation of Ryzen AI in January last year. Now, AI is actually revolutionizing the way we interact with PCs.
It enables more intelligent, personalized experiences that will make the PC an even more essential part of our daily lives. AI PCs enable many new experiences that were simply not possible before. These are things like real-time translations that will allow us to collaborate in new ways, things like generative AI capabilities that accelerate content creation, and also, we each want our own customized digital assistant that really will help us decide what we need to do and what we should do next. So to enable all of this, we actually need much, much better AI hardware, and that's why we're so excited to announce today our third-gen Ryzen AI processors. Our new Ryzen AI series actually delivers a significant increase in compute and AI performance and sets the bar for what a Copilot+ PC should do. Thank you, Drew. Here we go. This is Strix.
Strix is our next-generation processor for ultrathin and premium notebooks, and it combines our new Zen 5 CPU, faster RDNA 3.5 graphics, and the new XDNA 2 NPU. Thank you. And when you look at what we have, it really is all of the best technology on one chip. We have a new NPU that delivers an industry-leading 50 TOPS. We're gonna talk about TOPS a lot today. That's 50 TOPS of compute that can power new AI experiences at very low power. We have our new Zen 5 core that enables all the compute performance for ultrathin notebooks, and we have faster RDNA 3.5 graphics that really bring best-in-class application acceleration, as well as console-level gaming, to notebooks. Now, we have a couple of SKUs.
The flagship Ryzen AI 9 HX 370 has 12 Zen 5 cores, 24 threads, 36 MB of cache, the industry's most powerful integrated NPU, and our latest RDNA graphics. Strix is simply the best mobile CPU. So let me talk a little bit about what's special in this new NPU. NPUs are relatively new, and they're there specifically for these AI applications and workloads. Now, compared to our prior generation, XDNA 2 features a large array of 32 AI tiles with double the multitasking performance. It's also an extremely efficient architecture that delivers up to 2 times better energy efficiency than our prior generation when running Gen AI workloads.
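As an aside, a TOPS figure like the 50 TOPS quoted here is just peak multiply-accumulate throughput: operations per second equal tiles × MACs per tile per cycle × 2 (a MAC counts as a multiply plus an add) × clock. A minimal sketch of that arithmetic, where the per-tile MAC width and clock are hypothetical illustrative numbers, not AMD's disclosed XDNA 2 figures:

```python
def peak_tops(tiles, macs_per_tile_per_cycle, clock_ghz):
    """Peak throughput in TOPS: each MAC counts as 2 ops (multiply + add)."""
    ops_per_second = tiles * macs_per_tile_per_cycle * 2 * clock_ghz * 1e9
    return ops_per_second / 1e12

# 32 AI tiles (per the keynote); MAC width and clock are illustrative guesses
print(round(peak_tops(tiles=32, macs_per_tile_per_cycle=512, clock_ghz=1.5), 1))  # → 49.2
```

With those assumed numbers the model lands near the quoted 50 TOPS, which is the kind of back-of-envelope check these peak figures allow.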
And if you look at the performance of Strix compared to other chips in the market, and there are a lot of chips that are coming out with new NPUs, XDNA 2 delivers the highest performance: a leadership 50 TOPS of INT8 AI performance. And what this means is that third-gen Ryzen AI will deliver the best NPU-powered experiences in a Copilot+ PC. But let me just go a little bit deeper so you understand the technology. You know, every NPU is actually not the same when it comes to generative AI capabilities. Different NPUs actually support different data types, and that says something about the accuracy and the performance of the devices. So for generative AI, 16-bit floating-point data types are great for accuracy, but they actually sacrifice performance. And the current standard for NPUs is actually 8-bit integer data types.
They prioritize performance, but they sacrifice accuracy. What this means is that developers really have a tough choice to make between offering either a more accurate solution or a more performant solution. Now, XDNA 2 is the first NPU to support Block FP16, a block 16-bit floating-point format. What that means is Block FP16 actually combines the accuracy of 16-bit data with the performance of 8-bit data. This represents a huge leap forward in AI capability and enables developers to run complex models natively, without any quantization steps, at full speed, and what that means is with no compromise. So let me show you what this looks like. When you look at the example, these are three images generated by the popular Stable Diffusion XL Turbo Gen AI model.
We use the same prompt with no quantization or retraining for all three, and the only difference is actually the data type. So INT8 is on the left, which is what most NPUs are using. Block FP16 is in the middle, which is what XDNA 2 has, and then FP16 is on the right, which is the more traditional format. And as you can see, the two FP16 images look much better, with no real difference between the two. And it is only because our NPU supports Block FP16 that Ryzen AI is capable of generating the significantly better images in the same time that it takes to generate the lower-quality INT8 images.
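The accuracy tradeoff being described can be illustrated numerically. In a block floating-point format, a group of values shares one exponent while each value keeps its own small mantissa, so small values are not crushed by a single large outlier the way they are under one per-tensor INT8 scale. A toy NumPy simulation of that idea, where the block size and mantissa width are illustrative choices rather than XDNA 2's actual format:

```python
import numpy as np

rng = np.random.default_rng(0)
# Activations with a wide dynamic range, as is typical in generative models
x = rng.normal(0.0, 1.0, 1024) * np.exp(rng.normal(0.0, 2.0, 1024))

def quant_int8(v):
    # Per-tensor INT8: one scale shared by the whole tensor
    scale = np.abs(v).max() / 127.0
    return np.clip(np.round(v / scale), -127, 127) * scale

def quant_block_fp(v, block=32, mant_bits=8):
    # Block floating point: each block shares one exponent; each value
    # keeps a small integer mantissa relative to that shared exponent
    out = np.empty_like(v)
    for i in range(0, v.size, block):
        b = v[i:i + block]
        shared_exp = np.floor(np.log2(np.abs(b).max() + 1e-30))
        scale = 2.0 ** (shared_exp - mant_bits)
        out[i:i + block] = np.round(b / scale) * scale
    return out

err_int8 = np.abs(x - quant_int8(x)).mean()
err_bfp = np.abs(x - quant_block_fp(x)).mean()
print(err_bfp < err_int8)  # → True
```

On data with heavy-tailed magnitudes, the block format's mean reconstruction error comes out far below the per-tensor INT8 error, which is the effect the image comparison above makes visible.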
And this is just an example of why we believe that NPUs with the right data types are the best for the next generation PCs, and this is why we believe that XDNA 2 is the best NPU in the industry. Now, Microsoft is a great partner and is really leading the AI era, and we've been working very, very closely with them to bring Copilot Plus PCs to market with Strix. So to hear more about the work we're doing together, I'd like to welcome Pavan Davuluri, Corporate Vice President, Windows Devices at Microsoft, to the stage.
Lisa, it's a pleasure.
Pavan, so wonderful to see you. Thank you so much for joining me here in Taipei.
Absolutely.
You know, I know it's been a super busy time for you guys. So much is going on. Can you tell us a little bit about what's been going on?
Thank you, Lisa, for having us here today. It is an honor to be here at Computex with all of you. It has been a really busy couple of weeks for us here at Microsoft. We announced a new category of PCs built for AI, the Copilot Plus PCs. To realize the full power of AI in a PC, we re-engineered the entire system, from chip through every layer of Windows. These are the fastest, most performant, most intelligent PCs, and we are thrilled to partner with AMD on Strix-based Copilot Plus PCs.
Thank you. We are too.
Lisa, I truly believe we're at an inflection point here, where AI is making computing radically more intelligent and personal, and we've collaborated with AMD since day one on this, and I'm very excited about that.
Pavan, there's no question, Microsoft and you and your team have really led, you know, this whole AI PC era. We've always talked about user experiences. Can you talk a little bit about, you know, what you were thinking with Copilot PCs and this integration between operating systems, hardware, and software?
Sure, absolutely. The first thing I think customers will see with Copilot+ PCs is that they're just simply outstanding PCs. These devices will have leading performance and best-in-class battery life, and every app will work great on these machines. Now, for those next-gen AI experiences, how about we just take a look, Lisa?
Sounds good. Fantastic, Pavan.
So what you just saw there, those devices and those experiences, have on-device AI that is powerful enough to keep up with all of the experiences we want and efficient enough to be always running. For example, you just saw Recall that helps users instantly find anything on their PC, and that's only possible because we can semantically index content in the background, which requires always-on, high-performance AI. Cocreator lets you generate high-quality images by drawing in Paint, and we can do that with fast on-device image generation. We have Live Captions, which will translate your audio in real time on a PC, switching automatically between languages, in fact. But I truly think what you just saw, Lisa, is just the beginning. We built this thing called the Windows Copilot Runtime, which is effectively our library of APIs to let developers access new AI capabilities built into Windows.
Those capabilities are also backed by Microsoft's commitments around responsible AI, and I truly think we are gonna be blown away by what partners build, in addition to what Microsoft and AMD are bringing.
I completely agree. I think this is really bringing the entire ecosystem together. And, you know, one of the things, Pavan, that you and I have talked a lot about is the importance of the hardware.
Yes.
It's all about how do we give you enough power such that you can run these Copilot+ PCs? Can you talk a little bit about your vision there?
Sure, absolutely. With Copilot+ PCs, we wanna make it possible to deliver these next-gen AI experiences by using on-device capabilities and do that in concert with the cloud. On-device AI really, for us, means faster response times, better privacy, lower costs, but that means running models that have billions of parameters in them on PC hardware. Compared to traditional PCs, even from just a few years ago, we're talking 20 times the performance and up to 100 times the efficiency for AI workloads. And to make that possible, every Copilot+ PC needs an NPU that's at least capable of 40 TOPS, and we're deeply grateful for the close partnership with AMD. We are thrilled that Strix Point's NPU delivers an incredible 50 TOPS. That is super, super powerful for us.
We wanted to give you more.
We are always ready. The powerful thing with that, of course, it means we're efficiently delivering these Copilot+ experiences, but also gives us headroom for the next generation of AI, and we're at the start of that runway. Of course, the Copilot+ PCs complement that power of the NPU with at least 16 GB of RAM and 256 GB of SSD storage. So I truly think these devices are built for the era of AI that's coming.
You know, one of the things I can share, Pavan, is, you know, as we talk, you guys are always pushing us to give you more. You're always saying, "More TOPS, Lisa, more TOPS."
Yeah.
What are you doing with all those TOPS? Like, what's your vision for the future?
I do remember those conversations, Lisa.
By the way, it takes a lot of die area, just so you know, so-
I, I can only imagine. We are deeply excited about those commitments and, quite frankly, the deep collaboration across our teams to go bring that to life. For us, these breakthrough experiences require billion-parameter models always running on the device, and that requires high-performance NPUs to power them. And really, thanks to our deep partnership, we've been able to seamlessly cross-compile and execute over 40 on-device models on these AMD NPUs, which is very meaningful for us. We took advantage of all of the low-level software and hardware capabilities of the AMD silicon here, so we did not lose any performance or efficiency. Also, these high-performance NPUs are really the best way to drive overall PC performance.
Getting to 50 TOPS, for example, is a quantum leap for us, for sure, and it's really much, much more impactful relative to what you could do with just a CPU or GPU alone. And the other thing that excites me, really, is that these powerful NPUs then free up the CPUs and the GPUs for workloads where they shine. So I'm excited to see what developers will do with this going forward.
We are super excited as well. Thank you, Pavan, for being here. Thank you for your partnership, and thank you for leading the industry.
Thank you. Thank you. Thank you.
So in addition to Microsoft, we're also working with all of the leading software developers, including Adobe, Epic Games, SolidWorks, Sony, Zoom, and many others, to accelerate the adoption of AI-enabled PC apps. And by the end of 2024, we're on track to have more than 150 ISVs developing for AMD AI platforms across content creation, consumer, gaming, and productivity applications. Now, to give us a look at some of these upcoming Copilot+ PCs, let me welcome our next guest, a very close partner and good friend, Enrique Lores, HP President and CEO. Hello, hello. Enrique, thank you so much for being here. It's always fun to talk about what's next and what our teams have been working on.
Actually, thank you for having me here, and congratulations for all the announcements you have made today.
There's a lot more to come. Now, look, Enrique, you and I have talked a lot about the intersection of AI and hybrid work in recent months. What are you seeing in the industry?
I think this is actually what makes many of the announcements that we're making today very exciting. It's not only about the technology improvements that we are going to see, which you have explained extremely well; it's about how they are gonna be helping employees and companies to meet their goals. What we see today is a significant tension between all of us as companies that want to continue to improve the productivity of our teams, and our teams that are looking for increased flexibility, the ability to meet their personal and private goals. And we think that technology and AI can really help to bridge that gap because it can help to improve productivity and, at the same time, give the flexibility that our teams are looking for. And we look at AI PCs as the first instantiation of this change.
They will enable increased productivity. We are gonna talk about some of the new functionalities, and you're gonna see how unbelievable they are, but at the same time, they make sure that employees can deliver on their goals and meet their productivity goals.
Yeah, and we've talked a lot about how, you know, in this, you know, hybrid world, you know, people are really wanting all of these different features. Can you talk a little bit about that?
I think what we have learned during the last years is that what is really critical for all of us that develop the systems that our teams are gonna be using is that we co-engineer the solution. It is no longer about someone developing the software, someone developing the operating system, someone developing the chip, someone developing the hardware. We need to understand what experience we are building and deliver that experience together. This is something that we started a few years ago, and the teams have been learning how to make that happen and how to co-engineer these solutions. All the products that we have introduced during the last year, especially, for example, a product we introduced two weeks ago, the Pavilion Aero, show that.
This is going to be even more important as we show the new AI PCs that we are gonna be bringing to market because we have made an effort to integrate the new processors, the new chips, into the solutions that we are gonna be bringing together. We are incredibly excited about the new family of products we will be launching in a few weeks.
So-
Because-
I think you have something to show us. Is that right, Enrique?
I hope so. This is actually the new generation. Since we have done it together, we can show it together.
That's wonderful.
This is the next-generation OmniBook. It integrates the latest Ryzen AI 300 Series, which, as Lisa was saying before, will be the first product to have 50 TOPS integrated in the device. Performance, as Pavan was saying, is critical because it will enable us to continue to deliver incredible experiences to our customers. If you ask me what I'm most excited about, it's something that is gonna be very close to many people in this room. We at HP have a very large team here in Taipei, and many of us spend many hours in video conferences and Zoom calls. As you can notice, I have a strong Spanish accent. I know that for the team here, understanding my accent sometimes is difficult. So just imagine that you can get real-time translation, so I will speak my Spanish English-
Yeah, yeah, it's pretty good.
They will hear it in Chinese.
Come on, Enrique.
That's gonna make a big difference in productivity.
No, I think that's... By the way, I think your English is pretty good, Enrique, but I understand. My Chinese also needs to be better, so I totally-
We can help each other.
Yes. Look, we love it. I mean, we love the OmniBook and all the work that we've done together with third-gen Ryzen AI processors. But let's actually give everyone a preview of GenAI running on OmniBook. So let me show you again the popular Stable Diffusion XL Turbo model, which is generating some high-quality images of locations around Taiwan based on some simple text prompts. So starting with the White Cliffs of Taroko Gorge National Park, Sun Moon Lake with nice fall colors, Taipei 101, and then finally the peak of Jade Mountain. All of this is running on the OmniBook. You're seeing it for the first time. And the reason those pictures are so beautiful is because we have a very, very powerful NPU. We've co-engineered the system together.
We have the Block FP16 data support that I talked about, and you can see these beautiful photorealistic images almost instantaneously.
Yeah, so just imagine the productivity this is going to provide to product managers and creatives who are going to start creating their designs; just with this solution, they will really be able to accelerate their work. So, unbelievable progress.
Thank you so much, Enrique. Thank you for your partnership. I can't wait for everything that we're going to bring out together.
Thank you, Lisa. Great to be here.
Thank you.
Thank you.
Thank you. So I showed you earlier that third-gen Ryzen AI has the most powerful NPU, but you also need a high-performance CPU and GPU to deliver the best PC experiences possible. So let's take a look at some of that other performance. When we compare the Ryzen AI 300 Series to all of the latest x86 and Arm CPUs from our competitors, you can see why we say Strix is really the best notebook CPU in the market. Whether you're looking at single-threaded responsiveness, productivity applications, content creation, or multitasking, third-gen Ryzen AI processors deliver significantly more performance, often beating the competition by a large double-digit percentage across a broad range of use cases. Now, let's welcome another one of AMD's closest partners in the development of Copilot+ PCs. Let's welcome Luca Rossi, President of Lenovo Intelligent Devices Group.
Hey!
Luca.
Good morning, Lisa.
Wonderful to see you, Luca. Thank you so much for joining us today. We so appreciate the partnership, and Lenovo and AMD are doing so much together.
Yeah. So thanks for having me, Lisa, and strong partnerships are core to Lenovo's strategy. Our long-term partnership with AMD, as you know, Lisa, spans over 25 years and is a testament to this. Together, I think we have driven incredible innovations across PCs, mobile gaming, servers, tablets, and edge computing; for example, our ThinkStation P620 was the first Threadripper Pro workstation to deliver unprecedented performance and flexibility to power AI renderings and workflows. Beyond hardware, the Lenovo AI Engine Plus software in our Legion gaming laptops uses machine learning and integrates with AMD Ryzen processors to dynamically adjust settings, tailoring epic gaming experiences at the highest performance. Our AMD-powered devices, ThinkPad, ThinkBook, Yoga, and Legion, are all well equipped to handle AI applications, accelerate video editing, enhance 3D rendering, and elevate gaming to new heights.
Lisa, we are very excited for all the innovation that AMD is introducing, and Lenovo definitely will be a great partner to deliver them to the global markets. We can do this because of our global scale, top-notch engineering and design, and then the operational excellence as the world number one-
Thank you
PC maker, Lisa.
Thank you so much, Luca. And, you know, Luca, when, you know, we think about all the work we're doing together, you know, today we're talking about third gen Ryzen AI devices, and I know, your team has done a lot, our teams have done a lot together. Can you talk a little bit about your lineup and some of the AI experiences that you have?
Yeah. Yeah, yeah, with pleasure. So later this year, we are going to launch Lenovo AI laptops with the third-gen Ryzen AI processors: for consumers through our Yoga franchise, for commercial with our legendary ThinkPad brand, and for small and medium businesses through our ThinkBook lineup. And no matter if you are a creator, an enterprise professional, or a startup entrepreneur, Lenovo will have the perfect Copilot+ laptop with third-gen Ryzen AI, operating at an industry-leading 50+ TOPS. Congratulations for that, Lisa.
Thank you.
We'll also have some exclusive Lenovo AI experiences coming to the market this year. One is Creator Zone. We have co-engineered this with AMD, fine-tuning the AI model, and this is an exclusive Lenovo software tailor-made for creators, providing tools and features to boost creativity and productivity. Maybe let's take a look at how this software works. First, let me introduce Lenovo AI Now. That's our natural-language agent that runs locally on the device. One of the things that makes this so special is its personal knowledge base. With the right permissions, Lenovo AI Now can interpret user data to provide faster and more personalized output. You have seen Lenovo AI Now going through a script and generating both a thumbnail and a description to post on YouTube and share with the world. See, it was very easy, right?
Absolutely.
Now, let's say the user wants to create images of a fish to post on social media. All they have to do is use Lenovo Creator Zone. That's our other IP. With text-to-image, Creator Zone can generate an image based on any idea, and if the image needs further refining, the sketch-to-image function can assist the user. And all of these images were created with the same prompt, locally on the device, with a built-in responsible-AI check feature.
... It's fantastic.
So we have invested significantly in R&D, as you know, Lisa, and we have built unique Lenovo IP for running on-device AI workloads, including LLM compression and performance-improvement algorithms. We are confident that our third-gen Ryzen AI offering will stand out from the competition. And last but not least, and then I'm done, we have also created Smart Connect, another Lenovo IP that unifies AI PCs, tablets, smartphones, and other IoT devices into the same Lenovo ecosystem.
That's fantastic. Just, it's wonderful to see all of these pieces come together. Now, Luca, you've been holding something-
Mm.
and I, I think you're gonna show us what it is.
Mm.
Is that right?
Well, I wasn't supposed to even show this yet, since this is something we will not announce until later this year. But Lisa, you're right. I felt it's just too exciting to-
You, you look very happy, actually.
Yeah, I'm very happy. Straight from our R&D, straight from our R&D lab, this is the first ever sneak peek at our new Yoga laptop, powered by third-gen Ryzen AI.
That's beautiful.
Maybe we want to do left... and right. Yeah. So I can't share more for today, but I can,
That's very beautiful. Thank you.
But I can, I can tell you, this device represents a significant leap forward in next-generation AI computing. It will include some of the Lenovo-exclusive AI features that I mentioned, and this is just the beginning. We cannot wait to share more together and bring these transformative AI PCs to the world very soon.
Thank you.
Lisa, thank you for having me, and thanks, everyone.
Thank you so much, Luca.
Thank you.
Thank you.
Thank you so much.
Thank you. Thank you.
Thank you.
You can see we get very excited about our products. Next, I'd like to welcome one of the most important visionaries and innovators in the Taiwan ecosystem, and a very, very close partner, Johnny Shih, Chairman of ASUS.
Hi, Lisa.
Johnny. Thank you. Thank you so much for being here.
Thank you, Lisa. Yeah, I think, it's really my great honor to join you on stage, especially since you are now a legend of the computing industry and the pride of Taiwan.
Yes!
Thank you. Thank you. Thank you. Thank you. Johnny, you are actually a true visionary. I think we all have so much respect for you. You've shaped this industry for so many years. Can you just tell me a little bit about how, you know, the landscape of computing is changing, and how do you see AI?
Yes, Lisa. The AI PC will be one of the most disruptive innovations of our lifetime. The ubiquitous-AI era is the mega paradigm shift that we have long envisioned at ASUS, and I'm so overjoyed that it's finally becoming a reality. The world will be full of AI brains that come in different forms and sizes: super-sized, like 1.8-trillion-parameter mixture-of-experts models; big, medium, small, and even tiny, like less than 1 billion parameters. From the cloud, to the edge, to PCs and devices like phones and robots. AI PCs will play a critical, extremely critical role in this new distributed hybrid AI ecosystem.
Imagine an AI PC with small-language-model AI brains, capable of acting as a personal agent that understands and helps you with your personal needs, preferences, and even work, complementing the super brain in the cloud with local advantages: low latency, high security, and personalization, all while offloading cloud computing needs, especially for inferencing. Isn't it incredible? This will benefit user productivity in areas like video editing, design work, scientific computing, and a lot more. This vision is amplified and made possible by our partnership with AMD and the launch of the third-gen Ryzen AI. We are definitely co-innovating at the forefront of AI PCs together.
It is so inspiring, Johnny, to hear you talk with such passion. I think we all can feel your passion going forward. So, can you tell us a little bit about your new portfolio of AI PCs with third gen Ryzen AI?
Of course, Lisa. Later today, at 4:00 P.M., we'll be unveiling a range of cutting-edge AI PCs across our portfolio, with brand-new Zenbook, ProArt, Vivobook, ASUS TUF, and ROG laptops powered by the third-gen AMD Ryzen AI processors. These new lineups are equipped with the world's most powerful NPU, with 50 TOPS, and the superior AMD Zen 5 architecture that leads the industry in compute and AI performance. The third-gen Ryzen AI processor is the catalyst to bring personalized computing to everyone, from content creators to gamers and business professionals, empowering them like never before. This advancement gives the new Zenbook higher AI performance than a MacBook, while making it thinner and lighter as well. ASUS is so proud and honored to be the first OEM partner to make third-gen Ryzen AI systems available to consumers.
It will be ready for purchase in July. Isn't it incredible? Thank you.
These are super beautiful systems, Johnny.
Thank you.
You know, it's, it's also about the experiences, and I know that ASUS has also done a lot to create some new experiences for content creation and creativity. Can you tell us a little bit about that?
Sure. We have been working closely with AMD to integrate and optimize the incredible power of Ryzen AI processors in our ASUS Copilot Plus PC lineup. This enables us to create exclusive and unprecedented AI apps that empower users to be more efficient and creative than ever before. A great example is one of our recently launched AI apps, called StoryCube. Content creators who use multiple devices will love it. StoryCube is an AI-powered digital asset management app designed to provide a seamless and efficient file-organizing experience. It can act as a handy assistant by automatically identifying your loved ones' faces, and even detecting and sorting your media into various scenes, such as scenic road trips, skiing adventures, or adorable puppy moments.
With the 50 TOPS capability of the Ryzen AI NPU, StoryCube can drastically shorten AI categorization time from tens of seconds, when running only on a CPU, to just the blink of an eye.
Thank you. You know, Johnny, it's really exciting what we're doing on AI PCs, but AMD and ASUS have also had a very long history of partnering across motherboards, graphics cards, and embedded systems. Can you comment on some of that going forward?
Sure. It has been a great history together, creating incredible products like the original Crosshair motherboard. ASUS was even the first to push Ryzen gaming systems, which received an incredible response from users. Last year, ASUS introduced the first ROG Ally handheld device, which also adopted the AMD Z1 Extreme chip. We have solid leadership in the AMD Ryzen 9 segment, with 60% market share, and we are excited about expanding our partnership to offer AI solutions for specific industries like healthcare, education, and smart cities, with the goal of revolutionizing these sectors with powerful on-device AI applications. That's why we are so excited to work together in the next-gen AI PC space.
By combining cutting-edge AMD AI hardware, like Ryzen processors, with ASUS software expertise rooted in our design thinking philosophy, we are pushing the boundaries of AI PC innovation and delivering truly groundbreaking AI experiences to users.
Johnny, all I can say is you are an inspiration to us all.
Thank you.
Thank you so much for everything that you've done for our industry, and thank you for your partnership with AMD.
Thank you for the great partnership. Thank you very much.
Thank you. So I hope you got a feel for all of the customer excitement around third-gen Ryzen AI PCs. I'm very happy to say that the first notebooks will be available in July, and we have more than 100 consumer and commercial notebook design wins with Acer, ASUS, HP, Lenovo, MSI, and others. So lots of things to come. So now let's transition from PCs to the edge, where our embedded and adaptive solutions are bringing AI to a diverse set of markets and devices. AMD AI platforms are already broadly deployed at the edge. In healthcare, AMD chips are improving patient outcomes by enhancing medical imaging analysis, accelerating research, and assisting surgeons with precision robotics. In automotive, AMD AI solutions are powering the most advanced safety systems. And in industrial, customers are using AMD technology for AI-assisted robotics and machine vision applications.
We are number one in adaptive computing today, and thousands of companies have adopted our XDNA AI adaptive and embedded technologies to power their products and services. Let me just give you a few examples. Illumina is a global leader in genomics, and they use EPYC and AMD adaptive SoCs with their SpliceAI software to identify previously undetectable mutations in patients with rare genetic diseases. In automotive, Subaru's industry-leading EyeSight ADAS system uses Versal to analyze every frame captured by the front camera, which allows them to identify and alert the driver to possible safety hazards. Hitachi Energy uses AMD adaptive computing products in their widely deployed high-voltage direct current solutions to detect potential electrical issues before they become large problems and cause power outages.
Canon has adopted Versal to power the AI-based free viewpoint video system that captures high-resolution video from over 100 cameras simultaneously, which allows viewers to experience live events from every angle. Now, AI at the edge is actually a hard problem. It requires the ability to do pre-processing, inferencing, and post-processing all within the device, and only AMD has all of the pieces needed to accelerate end-to-end AI at the edge. We combine adaptive computing engines for pre-processing sensor and other data with AI engines for inferencing, and then high-performance embedded compute cores for post-processing and decision-making. Today, doing this requires three separate chips, and with our new Versal AI Edge Gen 2 series, we bring all of this leadership compute together to create the first adaptive solution that integrates pre-processing, inferencing, and post-processing in a single chip.
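The single-chip flow just described, where pre-processing, inferencing, and post-processing all happen without data ever leaving the device, can be sketched as a simple pipeline. The stage functions below are toy placeholders for illustration only, not actual Versal or Vitis APIs.

```python
# Minimal sketch of an end-to-end edge AI pipeline. Each stage stands in
# for what the text describes: adaptive engines (pre-process), AI engines
# (inference), and embedded CPU cores (post-process / decision-making).

def preprocess(raw_frame):
    # e.g. sensor normalization: scale 8-bit pixel values to [0, 1]
    return [pixel / 255.0 for pixel in raw_frame]

def infer(tensor):
    # toy "detector": flag any value above a threshold
    return [1 if x > 0.5 else 0 for x in tensor]

def postprocess(detections):
    # decision-making: raise an alert if anything was detected
    return "alert" if any(detections) else "ok"

def edge_pipeline(raw_frame):
    # With a single-chip solution, all three stages run on one device.
    return postprocess(infer(preprocess(raw_frame)))

print(edge_pipeline([12, 200, 40]))   # 200/255 > 0.5 -> "alert"
print(edge_pipeline([12, 40, 90]))    # all below threshold -> "ok"
```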
So today, we're announcing early access for our next gen Versal platform. More than 30 of our strategic partners are already developing edge AI devices powered by our new single-chip Versal solution, and we are incredibly excited about the opportunity to drive AI at the edge and see significant opportunities to extend our embedded market leadership with these new technologies. Okay, now let's turn to the data center. We've built the industry's broadest portfolio of high-performance CPU, GPU, and networking products. When you look at modern data centers today, they actually run many different workloads. They range from traditional IT applications to smaller enterprise LLMs, to large-scale AI applications.
You need different compute engines for each of these workloads, and only AMD has the full portfolio of high-performance CPUs and GPUs to address all of these workloads, from our EPYC processors that deliver leadership performance on general-purpose and mixed inferencing AI workloads, to our industry-leading Instinct GPUs that are built for accelerating AI applications at scale. Today, I'm gonna share details of our next-generation data center CPU and GPU offerings. Let's start first with CPUs. If you take a look at the CPU market, EPYC is actually the processor of choice for cloud computing, powering internal workloads for all of the largest hyperscalers and more than 900 public instances from all the major cloud providers. Every day, billions of people around the world use cloud services powered by EPYC. That includes Facebook, Instagram, LinkedIn, Microsoft Teams, Zoom, Netflix, WeChat, WhatsApp, and many, many more.
All of that is on EPYC. Now, we launched EPYC in 2017, and with every generation, more and more customers have adopted EPYC because of our leading performance, energy efficiency, and total cost of ownership. And I'm very proud to say that we're at 33% share now and growing. Now, when you look at today's data centers, most are actually powered by processors that are more than 5 years old. And when you look at the virtualization performance of our latest-generation server CPUs, the new technology is so much better that EPYC delivers 5 times more performance compared to those legacy processors. And even compared to the best processors today from the competition, our performance is 1.5 times faster.
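As a quick back-of-the-envelope on what that kind of generational gain means for a server refresh (the per-server wattage below is an illustrative assumption, not an AMD figure; the 5:1 and 65% figures are the ones cited in this keynote):

```python
# Rough consolidation math: one modern server replacing five legacy
# servers. Per-server power draw is an illustrative assumption.

consolidation_ratio = 5                       # legacy servers per new server
rack_space_saved = 1 - 1 / consolidation_ratio
print(f"rack space saved: {rack_space_saved:.0%}")        # 80%

legacy_power_w = consolidation_ratio * 500    # five servers at ~500 W each
new_power_w = legacy_power_w * (1 - 0.65)     # 65% less energy, as cited
print(f"power: {legacy_power_w} W -> {new_power_w:.0f} W")
```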
Many enterprises today are actually looking to modernize their general-purpose computing infrastructure and add new AI capabilities, often within the same footprint. By refreshing their data centers with fourth-gen EPYC, you can really accomplish this. You can replace five legacy servers with a single server, which reduces rack space by 80% and consumes 65% less energy. Now, many enterprise customers also want to run a combination of general-purpose and AI workloads without adding GPUs, and EPYC is, again, the best option for that. We are 1.7 times faster when running the industry-standard TPCx-AI benchmark, which measures the end-to-end AI pipeline across different use cases and algorithms. Fourth-gen EPYC is clearly the industry's best server CPU, but we're always pushing the envelope to deliver more performance. So I have something to show you today.
It's actually the preview of our upcoming 5th-gen EPYC processor, code-named Turin. So please take a look at Turin for the very first time. Turin features 192 cores and 384 threads and has 13 different chiplets built in 3- and 6-nanometer process technology. There's a lot of technology on Turin. It supports all the latest memory and IO standards and is a drop-in replacement for our existing 4th-gen EPYC platforms. Thank you, Drew. Turin will extend EPYC's leadership in general-purpose and high-performance computing workloads. So let's take a look at some of that performance. NAMD is very compute-intensive scientific software that simulates complex molecular systems and structures.
When simulating a 20-million-atom model, a 128-core version of Turin is more than three times faster than the competition's best, enabling researchers to more quickly complete models that can lead to breakthroughs in drug research, materials science, and other fields. Now, Turin also excels at AI inferencing performance when running smaller large language models, so I wanna show you a demo here. What this demo compares is the performance of Turin when running a typical enterprise deployment of Llama 2 virtual assistants, with a minimum guaranteed latency to ensure a high-quality user experience. Both servers begin by loading multiple Llama 2 instances, with each assistant asked to summarize an uploaded document.
Right away, you can see that the Turin server on the right is adding double the number of sessions in the same amount of time, while responding to user requests significantly faster than the competition. The other server soon reaches its maximum number of sessions and stops; it basically can't meet the latency requirements anymore. Turin continues scaling and delivers a sustained throughput of nearly 4 times more tokens per second. That means when you use Turin, you need less hardware to do the same work. In addition to leadership summarization performance, Turin also delivers leadership performance across a number of other enterprise AI use cases, including 2.5 times more performance when translating large documents, and more than 5 times better performance when running a support chatbot.
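The scaling behavior in the demo, adding sessions until the latency guarantee can no longer be met, can be modeled with a toy capacity function. All of the numbers below are hypothetical, not measured Turin or competitor figures.

```python
# Toy model of latency-bounded LLM serving: a server can host sessions
# only while each session still sustains its minimum token rate (SLO).
# Throughput figures are hypothetical, chosen only to illustrate scaling.

def max_sessions(total_tokens_per_sec, slo_tokens_per_sec_per_session):
    """Sessions that fit before each drops below the guaranteed rate."""
    return int(total_tokens_per_sec // slo_tokens_per_sec_per_session)

SLO = 10.0                                # each session needs >= 10 tok/s
server_a = max_sessions(400.0, SLO)       # hypothetical slower server
server_b = max_sessions(1600.0, SLO)      # hypothetical 4x-throughput server

print(server_a, server_b)                 # 40 160
print(f"sustained throughput ratio: {server_b / server_a:.1f}x")
```

The point of the model is that under a fixed latency floor, higher aggregate throughput translates directly into more concurrent sessions, which is why fewer servers can do the same work.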
Our customers are super excited about Turin, and I know many of our partners are actually here in the audience today, and I wanna say we're on track to launch in the second half of this year. So now let's turn to data center GPUs and some big updates on our Instinct accelerators. We launched MI300 last December, and it's quickly become the fastest-ramping product in AMD history. Microsoft, Meta, and Oracle have all adopted MI300. Every major server OEM is offering MI300 platforms, and we have built deep partnerships with a broad ecosystem of CSP and ODM partners. Again, many, many thanks to our ODM partners who are here today offering Instinct solutions. Now, if you look at today's enterprise generative AI workloads, MI300X provides out-of-the-box support for all of the most common models, including GPT, Llama, Mistral, Phi, and many more.
We've made so much progress in the last year on our ROCm software stack, working very closely with the open-source community at every layer of the stack, while adding new features and functionality that make it incredibly easy for customers to deploy AMD Instinct in their software environments. Over the last six months, we've added support for more AMD AI hardware and operating systems. We've integrated open-source libraries like vLLM and frameworks like JAX. We've enabled support for state-of-the-art attention algorithms. We've improved computation and communication libraries, all of which have contributed to significant increases in gen AI performance for MI300. Now, with all of these latest ROCm updates, MI300X delivers significantly better inferencing performance than the competition on some of the industry's most demanding and popular models.
We deliver 1.3 times more performance than the H100 on Meta's latest Llama 3 70B model, and 1.2 times more on Mistral's 7B model. We've also expanded our work with the open-source AI community. More than 700,000 Hugging Face models now run out of the box using ROCm on MI300X. This is a direct result of all of our investments in development and test environments that ensure a broad range of models work on Instinct. The industry is also making significant progress at raising the level of abstraction at which developers code to GPUs. We wanna do this 'cause people want choice in the industry, and we're really happy to say that we've made significant progress with our partners to enable this.
For example, our close collaboration with OpenAI is ensuring full support for MI300X in Triton, providing a vendor-agnostic option to rapidly develop highly performant LLM kernels. And we've also continued to make excellent progress adding support for AMD AI hardware in leading frameworks like PyTorch, TensorFlow, and JAX. Now, we're also working very closely with leading AI developers to optimize their models for MI300. So I'm very excited to welcome Christian Laforte, CTO and co-CEO of Stability AI, an important AMD partner known for delivering the breakthrough Stable Diffusion open-access AI models. Hello, Christian. How are you?
Thank you. I'm great. It's a great honor to be here to represent my colleagues and everyone else who makes Stability AI a really interesting player in the ecosystem.
Well, you know, we've showed a lot of Stability AI today. You're known for delivering these breakthrough open-access AI models that generate, you know, images, video, language, code, all of these things. Can you share some insights into how these models are pushing the boundaries of what's possible?
Yes. We're seeing incredible gains in productivity in every industry, and many were made possible because we did a crazy thing: we released our models and our source code for free. This allowed millions of developers and enthusiasts and thousands of researchers to adapt our models, make new discoveries at record pace, and create new applications extremely fast. Take, for instance, touching up old family photos to improve their resolution or quality, or maybe to remove someone you never, ever want to see again in your whole life. Doing this well used to take years of experience and sometimes hours of tedious work for each image. Now, applications like Stable Assistant and Stable Artisan, and a lot of other applications that leverage Stable Diffusion, allow anyone to create and edit images in seconds.
We're seeing similar gains in productivity, not just in images, but in the other research areas we're involved in: language, coding, music, speech, and 3D. Combining all of those together, we aim to soon boost the productivity of filmmaking and video game creation by at least 10x.
That's fantastic, Christian. Now, I understand you have some big news to tell the audience today.
Yes. So basically, the wait for Stable Diffusion 3 is almost over. We appreciate the community's patience and understanding as we dedicated extra effort to improving its quality and safety. Today, we're announcing that on June 12th, we will release the Stable Diffusion 3 Medium model for everyone to download. A lot of work went into this, and we're really excited to see what the community will end up doing with it. One thing that is maybe not obvious to non-technical people is that the frontier of research used to lead to standalone models like Stable Diffusion. But nowadays, there's a natural evolution happening.
These models are getting combined together in all kinds of novel ways, and by releasing them openly, we allow millions of people to help discover the best ways to bring them together and unlock new use cases. So SD 3 Medium is an optimized version of SD 3 that achieves unprecedented visual quality, and that the community will be able to improve for their own specific needs, helping us discover the next frontier of generative AI. It will, of course, run super fast on the MI300, and it's also compact enough to run on the Ryzen AI laptops that you've just announced. So here's an image produced with Stable Diffusion 3. We challenged it to illustrate what the famous Taiwan night markets look like.
It looks very nice, Christian.
Yes, thank you.
It looks very nice.
If you look really, really closely, you'll notice it's not quite photorealistic, but I think it captured the different elements of the text prompt really well. It's especially impressive when you consider that it was generated faster than it would take to actually type this long text prompt. It captured the pedestrians walking, the street made of stones, the fact that it's nighttime, the trees, and so on. SD 3 is able to do this using a number of new innovations, including the multimodal diffusion transformer architecture, which allows it to understand visual concepts and text prompts far better than previous models.
It supports both simple prompts, so you don't need to become an expert, and much more complex ones, where it will try to bring together all of their different elements. SD 3 excels at all kinds of artistic styles and photorealism. Here's a really challenging example, which we're comparing with our previous-generation model, Stable Diffusion XL, released less than a year ago. It's especially challenging because it involves hands, which are notoriously hard for these models to replicate, and repeating patterns, like the strings and frets on the guitar. These are all really, really challenging for these models to understand and draw accurately.
So notice how SD 3 generated more realistic details, like the shape of the guitar and the hands. If you look really, really closely, you may notice a few imperfections here and there, so it's still not quite perfect, but it's a big improvement over the previous generation.
No, it's, it's fantastic, Christian. And, you know, I know that, you know, your team's been working a lot on SD 3. What's your experience been like with MI300?
It's wonderful. 192 gigabytes of HBM, that's really a game changer. Having more memory is often how we unlock new models, and it's often the number one factor that helps us train bigger models faster and more efficiently. I'll give an example that we actually just encountered in collaborating with AMD. We have this creative upscaler feature in our API, and the way it works is that it can take an old photo, an old image that is less than 1 megapixel, and really blow up the resolution while improving the quality at the same time.
With this creative upscaler, we were happy when we were able to reach 30 megapixels on the NVIDIA H100. But once we ported our code over to the MI300, which, by the way, took pretty much no effort, we were able to reach 100 megapixels. And, you know, content creators always want more pixels, so this makes a huge difference. The fact that we didn't have to make any real effort to achieve this is a big step up. So, researchers and engineers are really gonna love the incredible memory capacity and bandwidth advantages that AMD Instinct GPUs deliver out of the box.
So, Lisa, moving forward, we'd really love to collaborate more closely with AMD because we'd like to create a new state-of-the-art video model. We need a lot more memory and a lot more compute to do this, and so we'd love to collaborate more closely with your team to achieve this and release it for the whole world to enjoy.
That sounds fantastic. It sounds like you need some GPUs.
Yes. Thank you.
Thank you so much.
We're at the right place for this.
Thank you so much, Christian.
Thank you. Have a great Computex.
You can see all of the innovation that's happening in such a short amount of time. Now, earlier in the show, I was joined by Microsoft's Pavan Davuluri, who shared the great work that we're doing together on Copilot Plus PCs. Microsoft is also one of our most strategic data center partners, and we've been working very closely with them on our EPYC and Instinct roadmaps. To hear more about our partnership and how Microsoft is using MI300X across their infrastructure, here's Microsoft Chairman and CEO, Satya Nadella.
Thank you so much, Lisa. It's great to be with all of you at Computex. We're in the midst of a massive AI platform shift with the promise to transform how we live and work. We are committed to partnering broadly across the industry to make this vision real. That's why our deep partnership with AMD, which has spanned multiple computing platforms, from the PC to custom silicon for Xbox, and now to AI, is so important to us. As Pavan highlighted, we are excited to partner with you to deliver these new Ryzen AI-powered Copilot Plus PCs, and we're also thrilled that last month we announced we were the first cloud to deliver general availability of virtual machines using AMD's MI300X accelerator. It's a massive milestone for both our companies, and it gives our customers access to very impressive performance and efficiency for their most demanding AI workloads.
In fact, it offers today the leading price performance for GPT workloads. This is just the start. We are very committed to our collaboration with AMD, and we'll continue to push AI progress forward together across the cloud and edge to bring new value to our joint customers. Thank you all so very much.
Thank you so much, Satya. We're so proud of our work with Microsoft. And as you heard from Satya, MI300 delivers the best price performance today for GPT-4 workloads and is being deployed broadly across Microsoft's AI compute infrastructure. Now, let me show you one more example of MI300 being used to power OpenAI's Wanderlust travel assistant, built on GPT-4. So again, let's start by letting the tool know that we're interested in Taiwan and that we're gonna be attending Computex. And you might ask something like, who's giving the opening keynote? But...
Yeah!
Got it right. Now, let's also ask Wanderlust what other interesting sites we should see in Taipei. And you can see, almost instantly, we get lots of options of things to do near the convention center. But if we wanna narrow it down to just a few, 'cause we only have a day, we can ask it to plan a day for us, and that day would include things like Elephant Mountain and Taipei 101, and you get the full itinerary. It just gives you an example of the power of AI. Wanderlust on MI300 looks wonderful, but it really shows the power of these assistive agents and how easy it is for developers to seamlessly integrate gen AI models into their applications so that we can make AI extremely helpful for all of us.
Now, the customer response to MI300 has been overwhelmingly positive, and it's just so clear that demand for AI is accelerating going forward. We're really just at the beginning of a decade-long mega cycle for AI, and to address this incredible demand, I have a very exciting roadmap to show you. We launched MI300X last year with leadership inference performance, memory size, and compute capabilities, and we have now expanded our roadmap so it's on an annual cadence. That means a new product family every year. Later this year, we plan to launch MI325X with more and faster memory, followed by our MI350 series in 2025, which will use our new CDNA 4 architecture. Both the MI325X and MI350 series will leverage the same industry-standard universal baseboard OCP server design used by MI300.
And what that means is that our customers can very quickly adopt this new technology. And then in 2026, we'll deliver another brand-new architecture with CDNA Next in the MI400 series. So let me show you a little bit, starting with MI325. MI325 extends our leadership in generative AI with up to 288 GB of ultra-fast HBM3E memory, with 6 TB/s of memory bandwidth, and it uses the same infrastructure as MI300, which makes it easy for customers to transition. Now, let me show you some competitive data. Compared to the competition, MI325 offers twice the memory, 1.3 times faster memory bandwidth, and 1.3 times more peak compute performance.
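To get a rough sense of what 288 GB per GPU buys at the node level, here's the FP16 weight-memory arithmetic. This counts model weights only; KV cache, activations, and framework overhead are ignored, so it's an upper bound, not a deployment guide.

```python
# Back-of-the-envelope: how many FP16 parameters fit in aggregate HBM
# on an 8-GPU node. Weights only; real deployments need headroom for
# KV cache and activations.

GB = 1e9
gpus_per_node = 8
hbm_per_gpu_gb = 288              # MI325-class capacity cited above
bytes_per_param_fp16 = 2

total_hbm_bytes = gpus_per_node * hbm_per_gpu_gb * GB
max_params = total_hbm_bytes / bytes_per_param_fp16

print(f"{max_params / 1e12:.2f} trillion parameters")   # ~1.15 trillion
```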
Based on this larger memory capacity, and you heard what Christian said about the importance of memory, a single server with 8 MI325 accelerators can run advanced models of up to 1 trillion parameters. That's double the size supported by an H200 server. Then, moving into 2025, we'll introduce our CDNA 4 architecture, which will deliver the biggest generational leap in AI performance in our history. The MI350 series will be built with advanced 3-nanometer process technology, will add support for the FP4 and FP6 data types, and will again drop into the same infrastructure as MI300 and MI325. We are super excited about the AI performance of CDNA 4. If you just take a look at the history: when we launched CDNA 3, we delivered 8 times more AI performance compared to our prior generation, and with CDNA 4, we're on track to deliver a 35x increase.
That's a 35-times increase in performance compared to CDNA 3, and when you compare the MI350 series to the B200, Instinct supports up to 1.5 times more memory and delivers 1.2 times more performance overall. Thank you. We are very excited about our multi-year Instinct and ROCm roadmaps, and I can't wait to bring all of this new performance to our AI customers. Now, I have one more topic I'd like to talk about today. In addition to our focus on Instinct, we've also made significant progress driving the development of high-performance AI networking infrastructure. AI network fabrics need to support fast switching rates with very low latency, and they must scale to connect thousands of accelerator nodes. At AMD, we believe that the future of AI networking must be open.
Open to allow everyone in the industry to innovate and drive the best solutions together. For both inferencing and training, it's critical to scale up the performance of hundreds of accelerators, connecting the GPUs in a rack or pod with an incredibly fast, highly resilient interconnect, so they can work as a single compute node to run the largest models with the fastest responses. Last week, I'm very happy to say, many of the largest chip, cloud, and systems companies came together to announce plans to develop an open standard for a high-performance fabric that can efficiently connect hundreds of accelerators. We call this Ultra Accelerator Link, or UA Link, and it's an optimized load-store fabric designed to run at high data rates that leverages AMD's proven Infinity Fabric technology.
We actually believe UA Link will be the best solution for scaling accelerators of all types, not just GPUs, and will be a great alternative to proprietary options. The UA Link 1.0 standard is on track for later this year, with chips supporting UA Link already well into development from multiple vendors. Now, the other part of training large models is the need for scale-out performance: connecting multiple accelerator pods to work together in at-scale installations, often spanning hundreds of thousands of GPUs. A broad group of industry leaders formed the Ultra Ethernet Consortium last year to address this challenge. Ultra Ethernet is a high-performance technology with leading signaling rates. It has extensions to RoCE for RDMA to efficiently move data between nodes, and it has a new set of innovations developed specifically for AI supercomputers. It's incredibly scalable.
It offers the latest switching technology from leading vendors such as Broadcom, Cisco, and Marvell, and above all, it's open. Open means that as an industry, we can innovate on top of UEC, and the industry can work together to build the best possible high-performance interconnect for AI and HPC. So when you look ahead, what does this mean? It means we have all the pieces: we have UA Link and Ultra Ethernet, and now we have a complete networking solution for highly performant, highly interoperable, and highly resilient AI data centers that can run the most advanced frontier models. So I hope you can see that AMD is the only company that can deliver the full set of CPU, GPU, and networking solutions to address all of the needs of the modern data center.
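To get a feel for why fabric bandwidth matters when accelerators must work as one compute node, here's the standard ring all-reduce cost model used for gradient synchronization in training. This is a generic textbook formula, not a UA Link or UEC specification, and the link speeds are illustrative.

```python
# Ring all-reduce cost model: each of N accelerators sends roughly
# 2*(N-1)/N of the gradient volume over its link per synchronization
# step. Link speeds below are illustrative, not fabric specs.

def allreduce_seconds(gradient_bytes, num_gpus, link_gbps):
    traffic = 2 * (num_gpus - 1) / num_gpus * gradient_bytes
    return traffic / (link_gbps * 1e9 / 8)   # Gbit/s -> bytes/s

grad_bytes = 2 * 70e9    # 70B parameters in FP16 ~ 140 GB of gradients

slow = allreduce_seconds(grad_bytes, 8, 400)    # 400 Gb/s per link
fast = allreduce_seconds(grad_bytes, 8, 1600)   # 4x faster fabric

print(f"{slow:.2f}s vs {fast:.2f}s per all-reduce step")
```

The model shows the synchronization time scaling inversely with link bandwidth, which is why both the scale-up fabric inside a pod and the scale-out network between pods are on the critical path for training.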
We have accelerated our roadmaps to deliver even more innovation across both our Instinct and EPYC portfolios, while also working with an open ecosystem of other leaders to deliver industry-leading networking solutions. Now, it's been a wonderful morning. We have so much that we talked about, so let me just wrap things up. We showed you a lot of new products today, from our latest Ryzen 9000 desktops and third-gen Ryzen AI notebook processors with leadership compute and AI performance, to our single-chip Versal Gen 2 series that will bring more AI capabilities to the edge, to our next generation Turin processors that extend the leadership and efficiency of our EPYC portfolio, and our expanded set of Instinct accelerators that will deliver an annual cadence of higher performance. What I can say is this is an incredible time to be in the technology industry.
It's an incredible pace of innovation, and I couldn't be more excited about all of the work that we're gonna do together in high-performance and AI computing as an industry. So a very, very special thank you to all of our partners who joined us today, Microsoft, HP, ASUS, Lenovo, and Stability AI, and especially thank you to all of our partners here in Taiwan and around the world. Thank you for being such a great audience, and have a great Computex 2024. Thank you.