Intel Corporation (INTC)

COMPUTEX Taipei 2024

Jun 4, 2024

Operator

Our expectations are forward-looking statements. These statements are based on current expectations and involve many risks and uncertainties that could cause the actual results to differ materially from those expressed or implied in such statements. For more information on factors that could cause actual results to differ materially, see our most recent earnings release and SEC filings at www.intc.com.

Jason Chen
Chairman and CEO, Acer

Please welcome Grammy-nominated percussion group, Ten Drum. By bringing AI everywhere-

Speaker 9

Whatever has been done can be outdone, and in ways we haven't imagined yet.

Operator

CEO of Intel Corporation, Pat Gelsinger.

Pat Gelsinger
CEO, Intel

Thank you. Thank you. [Foreign language] . Great to be here in Taipei. You know, I love the central role that Taiwan plays in the entire tech ecosystem. I just feed on the innovative energy of the Taiwan ecosystem. I just love the fact that we have Computex here, and great for the opportunity for Intel to be part of this incredible event for the entire tech ecosystem. So let me again say thank you for having us at Computex Taiwan 2024. You just heard a famous quote from Gordon Moore in the video. And if you looked at that, we actually, with the permission of his family and the magic of AI, we trained a model on Gordon Moore in his seventies and brought that back to life.

This famous quote: "Whatever has been done, can be outdone." And isn't that the very essence of Computex? This event isn't just about what's happening today; it's about what's happening next. And the things we'll talk about today will shape our future for years to come. You know, as we've seen time and again. And the first Computex, by the way, do you know when the first Computex was? 1981. And the next year, we were working on introducing a technological marvel called the 80286. Yeah! A 100,000-transistor chip, and that was a big deal at the time. Today? Hmm. It's a relic, one of those many relics that I worked on in my career, right?

We're now looking to a billion transistors on a single chip, and even to a trillion transistors in a single package by the end of the decade. Unlike what Jensen might have you believe, Moore's Law is alive and well... and Taiwan continues to play a central role. You know, Intel has had operations in Taiwan since 1985, and next year at Computex, we're gonna have a little 40-year birthday party for Intel and Taiwan. IT: Intel and Taiwan together. I wanna start with just a big thank you to all of our partners here. Together, we're changing the world again. Let's hear from them.

Speaker 9

Intel has been our vital partner.

Together, we are powering AI at the edge.

We're breaking the barriers between people and AI. Exploring incredible possibilities. Our partnership with Intel fuels collective innovation to solve hard problems. Unleashing creators' potential. Enabling business transformation.

From PC to data center, we are building a more resilient supply chain for semiconductors in the AI era.

We're creating AI solutions that drive global progress.

For the betterment of our society. To ethically care for everyone.

We will achieve breakthroughs that will change the world.

Pat Gelsinger
CEO, Intel

What a great group! I'm just very grateful for the partnership now for many decades. Some of you were younger when we first got to know each other. Our work together is giving rise to the AI era. You know, and when I think about this AI era, I consider it like the internet 25 years ago. It's that big. Every device will become an AI device. Every company will become an AI company, and we see this as the fuel that's driving the semiconductor industry to reach $1 trillion by the end of the decade. You know, Intel has a unique position in this, as the only company that gets to service 100% of the AI TAM, an AI continuum from semiconductors through products, and that starts with the Intel Foundry.

Chips are the heart of the global economy, and the world needs more flexible and resilient supply chains. This has given rise to us creating the world's first System Foundry for the AI era. Our foundry business will also drive the next generation of your innovation, but also the Intel products. Today, I'm gonna focus our attention on the Intel products portfolio and how it is enabling AI everywhere. As we think about this, it's the most consequential era, potentially, of the extraordinary careers that I and we have been able to have. You know, when the internet arrived, do you remember, you know, your first Netscape experience? It's like, "Yeah, man, this morning I got on the internet. Wow!" Okay. Now, you're on the internet every minute. Put your phone away. Listen to me, okay? Get off it, right? You know, right.

You know, and if you're a teenager, you're on it, like, every 10 seconds. It's just become pervasive in everything that we do, and AI everywhere will push the boundaries of what's possible across every human experience. You know, what we do in the data center and cloud, what we do at the edge, what we do in the PC, and everywhere in between, with open standards, security, and sustainability at the center. So let's start with the data center. AI, you know, has been central to the data center, and what Intel is doing has been the key to enabling the cloud data center for decades. We have a stunning 130 million Xeons powering data centers around the world, and this installed base is a huge advantage and a huge opportunity for us collectively. Our customers need infrastructure that's scalable and flexible.

They need standard platforms to integrate with existing systems, to run those decades of software, right? Those exabytes of data that they have in place. They need an open ecosystem to maximize choice and value. Of course, they also need more compute and performance, greater density, greater energy efficiency, greater server capacity. This creates a set of issues, issues that they have to deal with as they have more and more demands for those power and energy solutions. Today, we're launching the solution for this next era. Today, we're launching Xeon 6 with E-cores. We see this as an essential upgrade for the modern data center, a high core count, high density, exceptional performance per watt.

You know, it's also important to note that this is our first product on Intel 3, and Intel 3 is the third of our 5 nodes in four years as we continue our march back to process technology competitiveness and leadership next year. This enables us to have the choice for high-density scale-out workloads for the future. In simple terms: performance up, power down, and a drastically smaller footprint. So let's give you a visual demonstration of what this means. You know, with that, Chuck, if you could come on here. And what do you have here, Chuck?

Speaker 7

Well, Pat, I've got two racks here. Now, in this first rack, it is full of Xeon second gen scalable processors. So we've got 20 of them in here, all loaded, ready to go. Over here

... Oh, it's basically empty.

Pat Gelsinger
CEO, Intel

Okay, so why don't you do this, Chuck, and I know you didn't get enough workout time in today. I'd like you to fill this rack, right, with the equivalent compute capability of the Gen 2 using Gen 6. Okay?

Speaker 7

Give me a minute or two. I'll make it happen.

Pat Gelsinger
CEO, Intel

Okay, get with it. Come on. Hop to it, buddy. You know, it's important to think about, you know, the data centers. You know, every data center provider I know today is being crushed by how they upgrade, how they expand their footprint, the space, the flexibility. You know, for high-performance computing, they have more demands for AI in the data center. Having a processor with 144 cores, versus 28 cores for Gen 2, gives them the ability both to consolidate and to attack these new workloads, with performance and efficiency that was never seen before. So Chuck, are you done?

Speaker 7

I'm done! I wanted a few more reps, but you said equivalent. I could even put a little bit more.

Pat Gelsinger
CEO, Intel

Okay, so let me get it. That rack has become this.

Speaker 7

It has become that.

Pat Gelsinger
CEO, Intel

Now, there's a lot of leftover space here, Chuck.

Speaker 7

There is. I mean, if you're gonna reimagine the data center, right, you want the best power and efficiency possible, but there's all those new workloads you wanna run, so we left some extra space for those.

Pat Gelsinger
CEO, Intel

Well, okay, so we showed it space-wise, but can you give us a demonstration of what this looks like?

Speaker 7

Absolutely. Now, depending on who you listen to, the internet is roughly 60% to 80% media.

Pat Gelsinger
CEO, Intel

Yeah, incredible amount of the bandwidth, incredible amount of the workload is media workload.

Speaker 7

Exactly. So in the lab right now, I've got one second gen, and I've got one of the brand new Xeon 6s. You can see them up on screen, and they are transcoding like crazy, right?

Pat Gelsinger
CEO, Intel

Okay.

Speaker 7

It's hard to see, though, right?

Pat Gelsinger
CEO, Intel

Yeah.

Speaker 7

Okay, so let's do a visualization of that in real time. Now, the Xeon second gen, it is doing 56 videos at one time. Xeon 6, however, 144 videos at one time, and doing them much, much faster. So you can see the frames per second for each of those: 628 on the second gen and, you know, over 2,600 on the Xeon 6. I mean, up to a 4.2x performance gain on Xeon 6.
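As a quick sanity check, the demo's quoted figures can be turned into the ratios Chuck cites (all numbers as stated on stage; "over 2,600" is taken at its quoted floor, so the computed gain lands just under the "up to 4.2x" claim):

```python
# Per-socket transcode throughput quoted in the demo.
gen2_streams, gen2_fps = 56, 628   # 2nd Gen Xeon Scalable
xeon6_streams, xeon6_fps = 144, 2600   # Xeon 6 with E-cores ("over 2,600" fps)

stream_gain = xeon6_streams / gen2_streams   # concurrent videos
fps_gain = xeon6_fps / gen2_fps              # aggregate frame rate

print(f"{stream_gain:.2f}x more concurrent streams")  # ~2.57x
print(f"{fps_gain:.2f}x frame-rate gain")             # ~4.14x, vs. "up to 4.2x" quoted
```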

Pat Gelsinger
CEO, Intel

So this is pretty incredible. What does it mean for every data center? Time to upgrade.

Speaker 7

Exactly.

Pat Gelsinger
CEO, Intel

-to Gen 6.

Speaker 7

Exactly.

Pat Gelsinger
CEO, Intel

Thank you so much, Chuck.

Speaker 7

Thanks, Pat.

Pat Gelsinger
CEO, Intel

You know, and what you just saw was E-cores delivering this distinct advantage for cloud-native and hyperscale workloads: 4.2x in media transcode, 2.6x performance per watt. From a sustainability perspective, this is just game-changing. You know, a three-to-one rack consolidation over a four-year cycle. Just one 200-rack data center would save 80,000 megawatt-hours of energy, and Xeon is everywhere. So imagine the benefits that this could have across the thousands and tens of thousands of data centers. In fact, if just 500 data centers were upgraded with what we just saw, it would power almost 1.4 million Taiwanese households for a year, take 3.7 million cars off the road for a year, or power Taipei 101 for 500 years.
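The consolidation claims above can be sketched with quick arithmetic (core counts and totals as stated on stage; the per-rack saving is an inference from the quoted totals, not a figure Intel stated):

```python
# Quoted on stage: Gen 2 Xeon Scalable has 28 cores per socket,
# Xeon 6 with E-cores has 144, and the claimed rack consolidation is 3:1.
gen2_cores, xeon6_cores = 28, 144
print(xeon6_cores / gen2_cores)   # ~5.1x cores per socket

# "One 200-rack data center would save 80,000 megawatt-hours."
# Implied saving per rack (an inference, not a quoted number):
print(80_000 / 200)               # 400.0 MWh per rack over the cited cycle
```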

By the way, this will only get better. And you know, if 144 cores is good, well, let's put two of them together and have 288 cores. So later this year, we'll be bringing the second generation of our Xeon 6 with E-cores, a whopping 288 cores, and this will enable a stunning six-to-one consolidation ratio, a better claim than anything we've seen in the industry. And you can see why Xeon is gaining momentum in the marketplace. We're hearing it from our customers as well. For instance, eBay: they were already seeing 25% performance-per-watt improvement over competitive solutions, and greater than 90% performance-per-watt improvement over their third-gen installed base. Let's hear from another customer, you know, partner and key infrastructure software provider, SAP.

Speaker 8

At SAP, we are passionate about building a greener future. Together with Intel, we are crafting solutions that care for our planet. Our collaboration with Intel extends to enhancing the energy efficiency of modern data centers. For SAP HANA Cloud, it is key to provide industry-leading performance and scalability. Intel's Xeon 6 CPU with efficient cores is an important and effortless step for SAP HANA Cloud to optimize performance. In tests on SAP HANA and SAP HANA Cloud, we were able to achieve similar performance and scalability while reducing power consumption by up to 60%. Thank you, Intel, for a strong collaboration and trusted partnership over the many years.

Pat Gelsinger
CEO, Intel

Thank you to our friends at SAP, and look at the ecosystem momentum. Simply put, performance up, power down, and that's why Xeon is driving strong adoption across the ecosystem, and more to come. In Q3, we'll introduce the E-core's big brother, the P-core version, Granite Rapids, our next generation of the Xeon 6 family of processors. Xeon, back on its way in a big and powerful way for the industry and the ecosystem. Thank you very much. More to come. Clearly, the conversation is about AI and how these next-generation workloads evolve. Not only can Xeon be part of today's cloud workloads, but critically, your databases already run on Xeon. Increasingly, what we're seeing is that the LLMs are being complemented by real-time database environments or RAG, retrieval-augmented generation.

When you think about it, here we are in year 22 or 23 of cloud computing, and today, over 60% of workloads run in the cloud, but over 80% of the data remains on-prem. Wow! You know, that's an extraordinary amount of data that's unmonetized and unleveraged by businesses. Xeon plus RAG changes that. RAG becomes, in our view, one of the most important enterprise workflows, combining your data and databases, many of those real-time time-series data, you know, with LLMs. LLMs may be trained on data that's a month, a year, or years old, but combining that with real-time time-series embeddings makes an extraordinary combination. And not only is it powerful Xeons, like Xeon 6, but it's also being complemented by our AI accelerators, turbocharging that with Gaudi.
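The RAG pattern described here, retrieving fresh on-prem records and feeding them into the LLM prompt, can be sketched with a toy in-memory vector store. Everything below is illustrative (the bag-of-words "embedding", the documents, and the function names are assumptions for the sketch, not Intel's OPEA implementation, which uses a real vector database and neural encoder):

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; real RAG uses a neural encoder."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# On-prem records the base LLM has never seen (e.g. fresh time-series data).
docs = [
    "Q2 sensor log: line 3 temperature exceeded threshold at 14:02",
    "HR handbook: vacation requests require two weeks notice",
    "Q2 sensor log: line 7 nominal all shifts",
]

def retrieve(query: str, k: int = 2) -> list:
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str) -> str:
    # Retrieved context is prepended so the LLM answers from current,
    # private data instead of its possibly stale training set.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("which line exceeded the temperature threshold in Q2?"))
```

In a production stack the retrieval step would hit a vector database (the demo mentions Redis) and the prompt would go to an LLM served on Xeon or Gaudi; the data flow is the same.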

Customers are looking for high performance, cost-effective Gen AI training and inferencing solutions, and they've started to turn to alternatives like Gaudi. You know, they want choice. They want open, open software and hardware solutions, and time to market solutions at dramatically lower TCOs. And that's why we're seeing customers like Naver, Airtel, Bosch, Infosys, and Seekr turning to Gaudi 2, and we're putting these pieces together. We're standardizing through the open source community and the Linux Foundation. We've created the open platform for enterprise AI to make Xeon and Gaudi a standardized AI solution for workloads like RAG. Let's take a look. Diana, what do we have here?

Speaker 7

All right, so I thought we'd do a quick demo of that AI appliance that you mentioned.

Pat Gelsinger
CEO, Intel

Okay.

Speaker 7

So this is running on Xeon and Gaudi 2 with components from OPEA, such as the Redis Vector Database.

Pat Gelsinger
CEO, Intel

Okay. Yeah.

Speaker 7

We have here an example that's RAG plus LLM. I should mention that the LLM we're using is based on LLaVA, which is a multimodal LLM, so that will come into play as we do this demo. Let me start with maybe a quick medical query.

Pat Gelsinger
CEO, Intel

Okay, so this is Xeon and Gaudi-

Speaker 7

Yep

Pat Gelsinger
CEO, Intel

working together on a medical query. So it's a lot of private, confidential, on-prem data-

Speaker 7

Correct

Pat Gelsinger
CEO, Intel

-being combined with an open-source LLM.

Speaker 7

Exactly.

Pat Gelsinger
CEO, Intel

Okay.

Speaker 7

Exactly.

Pat Gelsinger
CEO, Intel

Very cool.

Speaker 7

All right, so let's see what our LLM has to say. So you can see, like a typical LLM, we're getting the text answer here, standard, but it's a multimodal LLM, so we also have this great visual here of the chest X-ray.

Pat Gelsinger
CEO, Intel

Okay, I'm not good at reading X-rays, so what does this say?

Speaker 7

I, I'm not great either, but the nice thing about this—I'm gonna spare you my typing skills. I'm gonna do a little cut and pasting here. The nice thing about this multimodal LLM is we can actually ask it questions to further illustrate what's going on here. So this LLM is actually going to analyze this image and tell us a little bit more about this hazy opacity, such as it is. So you can see here—

Pat Gelsinger
CEO, Intel

Okay

Speaker 7

It's saying it's down here in the lower left. So once again, just a great example of multimodal LLM.

Pat Gelsinger
CEO, Intel

Real time, this is how healthcare providers could be taking advantage of AI solutions today with RAG.

Speaker 7

Exactly.

Pat Gelsinger
CEO, Intel

Okay.

Speaker 7

Exactly.

Pat Gelsinger
CEO, Intel

What else do you got for us?

Speaker 7

Okay, well, you know, I mean, RAGs, of course, as you said, are great for, you know, up-to-the-minute information, also, you know, confidential-

Pat Gelsinger
CEO, Intel

Mm-hmm

Speaker 7

-on-prem data.

Pat Gelsinger
CEO, Intel

Okay.

Speaker 7

I know this RAG is super up-to-date, so I thought maybe I would ask it a question I've been wondering about recently.

Pat Gelsinger
CEO, Intel

Okay, what have you got?

Speaker 7

Okay, so it's actually, it has to do with the prices of the Gaudi 2 and Gaudi 3 kits. I figured I'd just ask that-

Pat Gelsinger
CEO, Intel

But those aren't publicly disclosed.

Speaker 7

I thought I'd just ask... What?

Pat Gelsinger
CEO, Intel

Those aren't publicly available information.

Speaker 7

Oh. Um.

Pat Gelsinger
CEO, Intel

Well,

Speaker 7

Okay

Pat Gelsinger
CEO, Intel

I guess they are now.

Speaker 7

They might be now.

Pat Gelsinger
CEO, Intel

Right. Did you check with our PR department before you did this?

Speaker 7

It just seemed like a great question-

Pat Gelsinger
CEO, Intel

Oh, well, okay.

Speaker 7

for the demo.

Pat Gelsinger
CEO, Intel

Well, you know, so the price of Gaudi 2, Gaudi 3 AI kits-

Speaker 7

Yeah

Pat Gelsinger
CEO, Intel

look pretty compelling.

Speaker 7

They do, don't they?

Pat Gelsinger
CEO, Intel

Yeah.

Speaker 7

Great.

Pat Gelsinger
CEO, Intel

You know, but this chatbot just broke some news.

Speaker 7

Mm-hmm.

Pat Gelsinger
CEO, Intel

So let's go a little bit further.

Speaker 7

Okay.

Pat Gelsinger
CEO, Intel

How does it compare with competition?

Speaker 7

All right. Let's take a look and see. I'm thinking the chatbot will know this as well. Seems to be pretty up-to-date on everything. So let's take a look and see how we do with competition. Oh, look, it's got an answer for that as well.

Pat Gelsinger
CEO, Intel

Okay.

Speaker 7

Pretty compelling.

Pat Gelsinger
CEO, Intel

In other words, it crushes the competition.

Speaker 7

It looks like it. It's looking great.

Pat Gelsinger
CEO, Intel

Well, I'm really starting to like this chatbot-

Speaker 7

Yeah

Pat Gelsinger
CEO, Intel

Thank you, Diana, for an incredible demonstration of real-time database.

Speaker 7

Mm-hmm. Sure. Thanks.

Pat Gelsinger
CEO, Intel

As you see, you know, Gaudi is not just winning on price; it's also delivering incredible TCO and incredible performance, and that performance is only getting better with Gaudi 3. The Gaudi 3 architecture is the only MLPerf-benchmarked alternative to H100 for LLM training and inferencing, and Gaudi 3 only makes us stronger. You know, we're projected to deliver 40% faster time-to-train than H100, and 1.5x versus H200, and, you know, faster inferencing than H100, delivering 2.3x performance per dollar in throughput versus H100. In training, you know, Gaudi 3 is expected to deliver 2x the performance per dollar. You know, this idea is simply music to our customers' ears: spend less and get more.

It's highly scalable, uses open industry standards like Ethernet, which we'll talk more about in a second, and we're also supporting all of the expected open source frameworks like PyTorch, vLLM, you know, and hundreds of thousands of models are now available on Hugging Face for Gaudi. And with our developer cloud, you can experience Gaudi capabilities firsthand, easily accessible, and readily available... But of course, with this, the entire ecosystem is lining up behind Gaudi 3, and it's my pleasure today to show you the wall of Gaudi 3. You know, and together with our partners, you know, we're thrilled with the momentum and the mass market opportunities that Gaudi 3 is bringing forward to our customers, because they want choice, they want TCO, they want alternatives, they want performance, and Gaudi 3, and our partner ecosystem, is delivering exactly that.

So with that, I wanna take a moment and actually have one of our partners here join us on stage. Inventec, a long-time Intel partner, using Gaudi to deliver AI compute capabilities across diverse sectors of the enterprise. Please join me in welcoming to stage my friend and yours, Inventec President, Jack Tsai.

Jack Tsai
President, Inventec

Hi, Pat. Good to see you.

Pat Gelsinger
CEO, Intel

Good to see you, Jack. So good. Thanks for being here, joining us on stage. So, you know, can you tell us, what about the work we're doing excites you most, Jack?

Jack Tsai
President, Inventec

Okay, I clearly remember when, a few years ago, you announced your vision to democratize AI. Since then, I think we have shared the same feeling about your vision, so we put a team to work on AI accelerators. And right now, we have a team focused on bringing Gaudi 3 into the market this year. So I think together with Intel, we will fuel gen AI adoption at scale and democratize it.

Pat Gelsinger
CEO, Intel

Well, but creating these AI solutions is pretty hard work, right? There's a lot of technology that goes into it. And what are you hearing from our customers about their greatest challenges and needs?

Jack Tsai
President, Inventec

I think right now, customers are dealing with a lot of complexity. They don't just need powerful compute accelerators; they also need a scalable and flexible architecture, and servers with high-bandwidth interconnects for memory and networking, to ensure efficient data movement and avoid bottlenecks. Effective power management and innovative cooling solutions are equally important. And on top of that, they need an optimized software stack that can seamlessly-

Pat Gelsinger
CEO, Intel

Mm-hmm

Jack Tsai
President, Inventec

... integrate with widely adopted AI frameworks like PyTorch or TensorFlow. With the Gaudi solution we are bringing to the market, we meet all the requirements.

Pat Gelsinger
CEO, Intel

Well, absolutely, and we're hearing the same from our customer conversations. Can you tell me a little bit more about how Gaudi 3 is such an important step forward for you?

Jack Tsai
President, Inventec

I think it starts with the performance advantage over the prior generation. The superior hardware spec and architecture make it a very strong option for enterprises, CSPs, and also AI developers. As you mentioned, Gaudi also delivers a terrific cost-performance, efficiency, and value proposition for AI training and inference. And one of the biggest advantages is that it's built on industry-standard solutions. Compute resource demand will drive substantial growth in the AI-as-a-service market, and solutions like Gaudi allow customers to easily scale out and scale up. This is super important.

Pat Gelsinger
CEO, Intel

Well, you know, I'm getting pretty excited about delivering our Gaudi 3 solutions together in the marketplace later this year. Thank you for joining us here at Computex, and most importantly, for your partnership on our Gaudi 3 solutions. Thank you, Jack.

Jack Tsai
President, Inventec

Thank you. Thank you, thank you.

Pat Gelsinger
CEO, Intel

Now, everything that we just talked about needs to be stitched together, and the fabric is really the heart. As I said, you know, the network is the computer, and, you know, the underlying network infrastructure for AI systems is absolutely essential. Customers are asking for open technology. They don't want proprietary islands in their data center for their AI solutions. Through the Ultra Ethernet Consortium, Intel, working with a broad ecosystem of players, is introducing an AI-optimized, scale-out fabric based on, you know, the venerable Ethernet standard, and we're delivering that both as a network interface card as well as foundry chiplets, and we're working to have our IPUs available for enterprise support. This includes people like Microsoft, Oracle, and Google all partnering with us, and partners like Red Hat making that available.

You know, that deals with the scale-out networking requirements, but just last week, we also announced the creation of the Ultra Accelerator Link for scale-up as well: a new industry standard to enable advanced, high-speed, low-latency communications for scale-up AI communication in the data center. So critical is establishing the open standards, not just at the software level, but at the networking level, and we're driving this with the industry to enable scale-up and scale-out networking. So that addresses the needs of the data center and of the cloud environments, but we're also enabling choice for enterprises across edge use cases as well. And we see AI at the edge as being a critical, explosive use case. You know, Intel has a strong foundation in the edge space... We have 90,000 edge deployments and 200 million CPUs over the past decade.

Together, you know, we've talked about things like IoT and how they're gonna revolutionize the edge, and, you know, they've opened up more use cases for edge deployments, but not dramatically. We see AI as the game changer for the edge. AI video telemetry is exploding new value propositions for the edge, and greater than 50% of edge deployments are expected to run AI as the primary workload by 2026. The best part about this is that the heartland of that innovation is here in Taiwan and the Taiwan ecosystem. For that, we recently launched the Intel Tiber Edge software platform, making it easy for enterprises to deploy, secure, and manage AI applications at the edge, and we're tailoring that for a variety of key vertical use cases.

Just one example of the many vertical use cases that we see is in healthcare, where, you know, we're harnessing AI to deliver better patient outcomes. A great example of that is Samsung Medison and their ultrasound solution. Together, we're putting AI into the hands of doctors to capture images more easily and faster than ever. We're doing this with Core Ultra combined with OpenVINO, and we're enabling doctors to capture 10 ultrasound cross-sections of a baby's heart in real time, delivering a 20% increase in AI throughput, performance, and frame rates. For those of you here at Computex, check it out in our booth and that of Samsung, along with a ton of other applications that you'll get to see this week.

And again, to me, this is the heartland of the ecosystem for edge use cases right here in Taiwan, and I see a tremendous set of opportunities for us together to bring these edge solutions to the marketplace. But beyond the edge, we see an even bigger opportunity, and that is in PCs. And when I think about the PC market, you know, this is the most exciting moment in 25 years. The last one, about 25 years ago, was Wi-Fi. How many of you remember Wi-Fi? Yeah. I mean, right, you know, we finished the Wi-Fi standard, and then what happened in the marketplace? Nothing. Then, about two years after the standards were in place, Centrino was launched, and this unleashed energy in the ecosystem, and all of a sudden, every coffee shop had to have a hotspot, and every hotel room had to have it.

The application for the PC moved from productivity to the internet. Similarly, we see the AI PC being that Centrino-like moment. We expect that by 2028, 80% of all PCs will be AI PCs, and Intel is leading the way. Core Ultra is already providing AI capabilities: 300+ applications, 500+ AI models. We've already shipped eight million Core Ultra devices since our launch last December, and we're working with the entire ecosystem to drive this capability into the marketplace as we enable the AI PC. The entire ecosystem is behind us, and you can see it right here. Ain't it a beautiful sight? You know, and here, you know, we're just thrilled by seeing these many new PCs, and I only have three of them that I'm carrying around these days.

You're seeing the next-generation Core Ultra PCs. About a third of these are our Lunar Lake PCs, which I'm about to talk about. We're at the forefront of this category-creation moment, and we're proud of the partnership that we have with you, the key software providers across the industry, and our partnership with Microsoft. Let's hear right now from Satya Nadella.

Satya Nadella
CEO, Microsoft

Thank you, Pat. It's great to be with you at Computex today. We are entering a new era of AI, where for the first time, computers can understand us instead of us having to understand them. To bring this vision to life, we've introduced Copilot+ PCs, the fastest, most AI-ready Windows PCs ever built. And just like we have always done with Windows, we're taking a partner-first approach, working across our entire ecosystem to bring these new devices to life. That's why the partnership with Intel is so important to us. Lunar Lake processors will power more than 80 new Copilot+ PCs. They deliver exceptional security and extended battery life with a 40+ TOPS NPU, unlocking capabilities simply not possible on other PCs.

It's this innovation that will make breakthrough on-device AI experiences like Recall or Cocreator, as well as all the stuff that developers will build with our new Windows Copilot Runtime. We're looking forward to what customers around the world will achieve with these new capabilities. Thank you so much for the partnership.

Pat Gelsinger
CEO, Intel

Thank you, Satya. When we launched Core Ultra with Meteor Lake, it also introduced this next generation of chiplet-based design, and Lunar Lake is the next step forward, which I'm happy to announce today. You know, Lunar Lake is a revolutionary design, with new IP blocks for CPU, GPU, and NPU. It'll power the largest number of next-gen AI PCs in the industry; we already have over 80 designs with 20 OEMs that will start shipping in volume in Q3. You know, I also wanna say, particularly here in Taiwan, a special thanks to our friends at TSMC, who were critical in helping us with many of the core technologies required to make Lunar Lake possible. This is a great example of the collaboration that we see in the foundry industry with Intel and TSMC, enabling new standards like UCIe as well.

You know, so let's dig a little bit more into why Lunar Lake is such an important step, you know, for the industry, enabling this next generation of thin and light AI PCs. You know, first, it starts with a great CPU, and with that, this is our next generation Lion Cove processor that has significant IPC improvements and delivers that performance while also delivering dramatic power efficiency gains as well. So it's delivering Core Ultra performance at nearly half the power that we had in Meteor Lake, which was already a great chip. You know, the GPU is also a huge step forward. It's based on our next generation Xe 2 IP, and it delivers 50% more graphics performance. And literally, we've taken a discrete graphics card, and we've shoved it into this amazing chip called Lunar Lake.

Alongside this, we're delivering strong AI compute performance with our enhanced NPU, up to 48 TOPS of performance. You heard Satya talk about our collaboration with Microsoft and Copilot+, and along with 300 other ISVs, we have incredible software support, more applications than anyone else. Now, some say that the NPU is the only thing that you need, and simply put, that's not true. You know, having now engaged with hundreds of ISVs, most of them are taking advantage of CPU, GPU, and NPU performance. In fact, our new Xe2 GPU is an incredible on-device AI performance engine; only 30% of the ISVs we've engaged with are using the NPU alone. The GPU and the CPU, in combination, deliver extraordinary performance. The GPU delivers 67 TOPS with our XMX engines, a 3.5x gain over the prior generation.

Since there's been some talk about this other X Elite chip coming out and its superiority to x86, I just wanna put that to bed right now. Ain't true. You know, Lunar Lake, running in our labs today, outperforms the X Elite on the CPU, on the GPU, and on AI performance, delivering a stunning 120 TOPS of total platform performance. And it's compatible, so you don't have any of those compatibility issues. You know, this is x86 at its finest. Every enterprise, every customer, every historical driver and capability simply works. This is a no-brainer. Everyone should upgrade. And, you know, the final nail in the coffin of this discussion: some say the x86 can't win on power efficiency. Lunar Lake busts this myth as well.
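For context, the 120 TOPS platform figure lines up roughly with the per-engine numbers cited in this keynote (48 TOPS NPU, 67 TOPS GPU); the CPU contribution used below (about 5 TOPS) is an assumption to make the arithmetic close, not a figure stated here:

```python
# Rough tally of Lunar Lake platform AI throughput (TOPS), per keynote figures.
npu_tops = 48   # enhanced NPU (stated in the keynote)
gpu_tops = 67   # Xe2 GPU with XMX (stated in the keynote)
cpu_tops = 5    # assumed CPU contribution to reach the stated platform total

platform_tops = npu_tops + gpu_tops + cpu_tops
print(platform_tops)  # 120
```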

This radical new SoC architecture and design delivers unprecedented power efficiency, up to 40% lower SoC power than Meteor Lake, which was already very good. So simply put, Lunar Lake is the flagship platform for AI PC innovation, bar none. An unmatched combination of performance, compatibility, application and software enablement, and power efficiency. We are committed to the AI PC. We're gonna drive it forward with our ISVs, OEMs, and ecosystem partners, and most importantly, we're gonna deliver it in volume in the marketplace. We're partnering with our top OEMs to bring forward the power of Core Ultra and Lunar Lake, and let's talk to one of our partners right now. You know, we've enjoyed long-term relationships with so many of you, and one of those that I enjoy the most has been with ASUS and its chairman, Jonney Shih. Please join me in welcoming him to the stage. Jonney?

Jonney Shih
Chairman, ASUS

Hi, Pat.

Pat Gelsinger
CEO, Intel

Jonney, it is just a pleasure to have you here, joining us, today. And I know you're really excited about the work that we're doing, you know, together in all areas of the data center and Gaudi, but most importantly for today, the AI PC. So tell us about it.

Jonney Shih
Chairman, ASUS

Yes, Pat, I'm indeed super excited about this unprecedented, you know, paradigm shift. For the first time in this industry, we can envision that the future will be the era of ubiquitous AI. The world will be full of AI brains in different forms and sizes, including super, big, medium, small, and even tiny ones, like this, with around 1 billion parameters. From the cloud to the edge, to PCs and end devices like phones and robots. This is what has been keeping me awake at night.

Pat Gelsinger
CEO, Intel

Awake with excitement.

Jonney Shih
Chairman, ASUS

The incredible possibilities of AI. And with this unprecedented paradigm shift, the AI PC plays a very critical role in this new distributed, hybrid AI ecosystem. Imagine an AI PC with a small but intelligent brain that can act as a personal agent, that can understand and help you with your personal needs, preferences, and even work, while complementing the super brain in the cloud with the local advantages of low latency, high security, and personalization, all the while offloading the cloud computing needs, especially for inferencing. And now, you have just unveiled your ultimate weapon, Lunar Lake, which indeed has an exceptional architecture that delivers unrivaled performance.
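The hybrid routing Jonney describes, a local small model for private, low-latency work, with the cloud handling the rest, can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual code; the function names and the routing heuristic are invented for the example:

```python
# Hypothetical sketch of hybrid local/cloud AI routing: keep private or
# simple prompts on the device's small model, send the rest to the cloud.

def run_local(prompt: str) -> str:
    # Stand-in for an on-device small language model (~1B parameters).
    return f"[local] {prompt}"

def run_cloud(prompt: str) -> str:
    # Stand-in for a large cloud-hosted model.
    return f"[cloud] {prompt}"

def route(prompt: str, private: bool, max_local_words: int = 64) -> str:
    # Private data never leaves the device; short prompts also stay local
    # for latency. Everything else is offloaded to the cloud brain.
    if private or len(prompt.split()) <= max_local_words:
        return run_local(prompt)
    return run_cloud(prompt)

print(route("Summarize my meeting notes", private=True))
```

The threshold here is a toy stand-in; a real router would weigh model capability, battery, and connectivity, not just prompt length.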

Pat Gelsinger
CEO, Intel

Yeah, this is really exciting. Your comments about, you know, what happens in the cloud, but being able to do it on my device, my PC, with my data?

Jonney Shih
Chairman, ASUS

Sure.

Pat Gelsinger
CEO, Intel

You know, you know, I just love this. How is ASUS leveraging Lunar Lake to further advance your AI capabilities?

Jonney Shih
Chairman, ASUS

Lunar Lake is a revolution, Pat. Its next-gen architecture specs are incredible. By combining this processing power with our AI software suite, we will deliver unmatched performance and ease of use. This will empower creators, professionals, and students to leverage AI and take their work to the next level. And our long-term partnerships and collaboration enable us to go even further. Together, we can develop cutting-edge solutions that fully utilize the power of your 3x AI performance and a small-language-model type of brain, taking advantage of domain-specific, enterprise-specific, or personalized databases through our RAG support. Intelligently leveraging both the local brain and the cloud brain provides a better experience than you can achieve with pure cloud-based solutions. I believe this will help accelerate the paradigm shift to hybrid AI and truly realize the vision of the ubiquitous AI era.
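The RAG flow described here, retrieving from a local, personalized database and handing the context to an on-device model, can be sketched minimally. This is an illustrative toy, not ASUS's software suite; keyword overlap stands in for real embedding similarity, and the documents are invented:

```python
# Hypothetical sketch of on-device RAG: retrieve from a local document
# store, then build a context for a small language model to answer from.

docs = [
    "Warranty policy: laptops are covered for two years.",
    "The AI PC ships with an NPU rated at 48 TOPS.",
    "Office hours are 9 to 5 on weekdays.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    # Score documents by word overlap with the query (embedding stand-in).
    q = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def answer(query: str) -> str:
    context = " ".join(retrieve(query))
    # A real system would prompt the local small language model here.
    return f"Answer based on: {context}"

print(answer("what is the NPU TOPS rating"))
```

The local store is what makes this personal: the retrieval step runs entirely on-device, so private documents never need to reach the cloud.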

Pat Gelsinger
CEO, Intel

Well, Jonney, you and I have been innovating together for decades now, and it truly is a pleasure to work with you, ASUS, for this incredible new opportunity-

Jonney Shih
Chairman, ASUS

Oh, it's my pleasure.

Pat Gelsinger
CEO, Intel

The AI PC category has to offer. Let's have a warm thank you from our audience and from Intel to Jonney.

Jonney Shih
Chairman, ASUS

Thank you. AI PC. Go! Go! Go. Thank you for your great partnership.

Pat Gelsinger
CEO, Intel

Thank you, Jonney.

Jonney Shih
Chairman, ASUS

Thank you. Yeah.

Pat Gelsinger
CEO, Intel

Thank you. So building on Meteor Lake and Lunar Lake for mobile, more to come. Arrow Lake is our next-generation product we'll introduce later this year that will bring AI to all PC categories, starting with the desktop in Q4. And in 2025, it just gets better, you know, with Panther Lake. And Panther Lake on Intel 18A will accelerate and scale our position. And I am so excited about this product because it represents the culmination of so much of what we've been working on since I've been back at the company. You know, this is our fifth node in four years. In just the next week, we'll be powering on the first chips coming off this wafer on 18A. 18A brings process leadership back to Intel, where it belongs, and it's been a historic pace of process technology innovation and product innovation.

And rest assured, we're gonna have a lot to talk about at Computex 2025, because Panther Lake's gonna be coming to life in a powerful way. So I look forward to seeing you and all of our partners here for 18A, Panther Lake, and 2025. And, you know, given the incredible energy and innovation of the ecosystem here, I wanna bring out another one of our key partners. It's my pleasure: for more than four decades, Acer has been one of the leading IT companies in the world and a great partner, now spanning 160 countries. I'm pleased to welcome Acer Chairman and CEO and longtime friend, Jason Chen. Jason?

Jason Chen
Chairman and CEO, Acer

Hey, hey, hey, Pat. How are you doing?

Pat Gelsinger
CEO, Intel

Jason, I am well. I am well. Anytime I'm in Taiwan, it's a good day.

Jason Chen
Chairman and CEO, Acer

I'm happy to hear that. IT, Intel, Taiwan, and somehow I keep hearing my company being called-

Pat Gelsinger
CEO, Intel

Yeah.

Jason Chen
Chairman and CEO, Acer

AI, Acer Inc.

Pat Gelsinger
CEO, Intel

Okay, so now we know what IT stands for and what AI stands for.

Jason Chen
Chairman and CEO, Acer

There you go.

Pat Gelsinger
CEO, Intel

Thanks for joining us, today.

Jason Chen
Chairman and CEO, Acer

Happy to be here.

Pat Gelsinger
CEO, Intel

Tell me more about what response you're seeing from Core Ultra in the market and with your customers?

Jason Chen
Chairman and CEO, Acer

In fact, we started to ship Meteor Lake-based products at the end of last year. Since then, we've been getting amazing, wonderful feedback from consumers and from developers. People are very happy with the performance and excited about what the NPU can do.

Pat Gelsinger
CEO, Intel

Yeah, and you know, what kind of things are they now doing with it that they didn't do before?

Jason Chen
Chairman and CEO, Acer

People are developing applications based on what the NPU can provide together with the GPU and CPU, including in customer service, and including medical devices, like what you just showed Samsung doing. Acer Medical is also developing artificial intelligence-based medical image diagnostic solutions built on OpenVINO, which we are providing to help bring medical diagnostics to places where medical specialists are not readily available.

Pat Gelsinger
CEO, Intel

Yeah. I love what you say, that customers are shifting from search to ask.

Jason Chen
Chairman and CEO, Acer

Correct.

Pat Gelsinger
CEO, Intel

Right.

Jason Chen
Chairman and CEO, Acer

The usage model has been changed from search to ask. And also, when we were talking last night, first time ever, we see people have to learn about the computer. Now, the computer is adapting to people. We are very excited to see that happening.

Pat Gelsinger
CEO, Intel

Yeah. You know, the ability for hardware and software coming together, you know, are critical for these experiences and new Edge experiences. What do you think is next?

Jason Chen
Chairman and CEO, Acer

We think this is just the beginning, because the usage model has changed from search to ask, from people learning about the computer to the computer learning about people. What we believe comes next is usage model innovation; there will be more usage models. According to a Bloomberg magazine article I just read over the weekend, there are more than 12,000 funded startups whose business models are based on artificial intelligence. We believe new usage models based on AI will eventually prevail and become part of everybody's life.

Pat Gelsinger
CEO, Intel

Yeah. So, you know, I'm, I'm a geeky kind of guy, and I love hardware.

Jason Chen
Chairman and CEO, Acer

I'm a sales guy.

Pat Gelsinger
CEO, Intel

Yeah, yeah. So but what, what are you selling there, Jason?

Jason Chen
Chairman and CEO, Acer

Oh, what we're going to show to people is a new Lunar Lake-based computer.

Pat Gelsinger
CEO, Intel

Yeah.

Jason Chen
Chairman and CEO, Acer

A sleek design, sweet product line.

Pat Gelsinger
CEO, Intel

Well, I'm super excited. We'll soon be seeing that launched and in the market, and we're gonna sell a lot of those. From one geek to a salesman, let's make it happen.

Jason Chen
Chairman and CEO, Acer

Here we go. This is all good, but the best is yet to come. Thank you. Thank you very much.

Pat Gelsinger
CEO, Intel

Thank you, Jason.

Jason Chen
Chairman and CEO, Acer

Thank you.

Pat Gelsinger
CEO, Intel

Take care. You know, I've had an incredible career, you know, incredibly impactful in so many different domains, but this is the most consequential time of our careers together. You know, the amount of innovation that we're seeing and the impact that it will have across industries. You know, we showed the healthcare example. Powerful example. You know, we're working on AI-powered SoCs for automobiles and changing the entire vehicle experience with Zeekr. We're helping companies like Geek+ modernize their logistics with robotics solutions and AI technologies. We're using computer vision in new and powerful ways for environmental purposes, working with companies like Blue Eco-Line on environmental testing and keeping rivers clean, changing our world for our children. We're also creating sustainable agriculture with companies like Nature Fresh Farms.

This is a world of new possibilities, better outcomes for every facet of our lives. It's a world that will increasingly run digital, and everything digital runs on silicon. And that increasingly requires the role that we play and the role that Intel enables. We were made for this moment. You know, as you think about Intel, it's our global scale from client, edge, data center, and cloud. It's our installed base, all of which is based on open standards. We believe in building ecosystems that are flexible, customizable, and cost-effective. And we believe in Moore's Law, and it is alive and well. As I like to say, until the periodic table is exhausted, Moore's Law isn't dead. And we'll continue to build the trusted brand, leading on security and sustainability. And most importantly of all, we got you.

We have a powerful ecosystem, and it's at the heart of our success. You heard from some of that ecosystem at the start, and as we bring our time to a close, it's only fitting to hear from this ecosystem again. Let's hear from them now.

Speaker 9

I'm with Intel. I am with Intel. I'm with Intel. I'm with Intel. I'm with Intel. I am with Intel. I'm with Intel. I'm with Intel. I'm with Intel. I am with Intel.

Pat Gelsinger
CEO, Intel

I just want to say thank you to all of you. To our entire Taiwan ecosystem, Intel is with you, I'm with you, and it's great to spend time with you here, our friends and our partners, because together, we have such incredible opportunities. I want to close where I started with that famous quote from Gordon Moore, but I want to tweak it just a bit. "Whatever has been done will be outdone." That's the spirit of Taiwan. That's the spirit of today's Intel. We're driven to outdo today's technologies and create what comes next, and I'm looking forward to working with you, our friends and our partners, to make it happen. Thank you so very much.
