Intel Corporation (INTC)

Product Launch

Jan 5, 2026

Lip-Bu Tan
CEO, Intel

Good afternoon, everyone, and welcome to CES. And thank you for joining us today. It is an honor to be here with so many innovators, creators, customers, and partners from around the world. We are living in a moment where compute is being redefined. AI is reshaping every workflow, every industry, and every device, from cloud to edge. At Intel, our mission is to make that intelligence accessible, efficient, and ubiquitous. Over the last year, our team has pushed the limits of architecture, fabrication, and core optimization across hardware and software. I'm pleased to share that we have delivered on our commitment to ship our first 18A products by the end of 2025. In fact, we have over-delivered. I'm delighted to announce that we are ramping all three Core Ultra Series 3 die packages as we speak.

What you will see and hear later today are breakthroughs that can only be enabled by a tight integration of silicon design, advanced silicon process, and packaging technology. It is something Intel is uniquely positioned to do, working closely with our OEM partners and the open ecosystem. You will hear more from Jim and his team later about all the technology we are announcing today. Our latest Series 3 base system truly represents a new class of computing designed for an AI-driven future, built to empower developers, creators, and engineers to drive and deliver massive leaps in performance and efficiency. This announcement represents the next evolution of the PC. So thank you for joining us. Let us talk about what is next and show you what is possible. Thank you.

Every breakthrough begins with silence. A moment of focus. A moment of building in quiet. But in that quiet, inside these fabs, the momentum has been rising. See, quiet doesn't mean stillness, because here, progress never pauses. It pulses, pushes, accelerates. Layer by layer, line by line, gate by gate. And now, it's ready to meet the world. Progress never pauses. And neither do we.

Operator

Please welcome Senior Vice President, General Manager, Client Computing Group, Jim Johnson.

Jim Johnson
SVP and General Manager of Client Computing Group, Intel

All right. Welcome to CES 2026. I'm excited to be leading the Client Computing Group's PC and edge products at one of the most exciting times in the industry and for our company. CES holds a special place in my heart because together we get to set the tone for what's coming this year. A special welcome to our partners. Thank you for coming and spending the time with us. You know, we could not do it without you. This partnership is what delivers. But before we talk about client, let's stop for a second and talk about Intel. The industry and Intel are both at a strategic inflection point. In 2026, we're delivering leadership products on a leadership process node, combining compute, graphics, and AI, and scaling it with customers to be the broadest AI PC platform we have ever built together.

In the last five years, we've invested heavily in capital, R&D, new fabs, and new tools. Much of this investment is aimed at pushing process technology forward and expanding manufacturing capacity in the United States and around the globe. 18A is the center of that effort. As Lip-Bu mentioned, we are in production and ramping Panther Lake. This is a big milestone: 18A is the world's first foundry node with RibbonFET and PowerVia technologies. What are those, and why are they important? RibbonFET is gate-all-around transistor technology that gives our architects a way to precisely control electrical current, critical for performance and energy efficiency. Backside power delivery via PowerVia enhances power flow and signal delivery. What does this mean? It enables Intel to deliver up to 15% better performance per watt with more than 30% better chip density.

This was not an easy task, but we executed our plan and we're ramping to our expectations. Leadership manufacturing has once again become a strategic anchor of Intel. Alongside delivering x86 solutions at unmatched scale, this is where our multi-generation effort to drive power-efficient performance, hybrid core architectures, and multi-engine local AI is coming to life. We build for what's needed today, but work really hard with our partners to anticipate what's required next. And right now, as we know, that next wave is AI. It's everywhere. It's in every industry, and it's in every conversation. And we'll talk about it more today. It's a huge opportunity for all of us. And together, we've been out front shaping what it means for personal and edge computing. And with all the excitement around AI, we always remind ourselves, fundamentals still matter.

Everybody wants a cool, quiet PC, strong performance, and all-day battery life when we're on the go, and the next step of the journey is here. This is Core Ultra Series 3, the first processor built with 18A technology, the most advanced process in the world. We've taken big steps in improving our major SoC IPs, in virtually every piece of the chip design, bringing the latest and greatest on all fronts. We have new E-cores, brand new P-cores, a massive GPU with built-in ray tracing, and an NPU that packs more low-power AI into a very dense area, much denser than the prior generation. Next-gen memory, next-gen I/O, and the latest connectivity, wired and wireless. All of this was done to achieve a primary goal with our partners: scaling power-efficient, leadership performance across all workloads and expanding our graphics capability.

I can't wait for you to see it. We're also enabling a broader set of devices with our OS partners, our ODM partners, our OEM customers, and importantly, the ISV community. Together, we've nailed these goals with Core Ultra Series 3, combining the performance and power efficiency we launched across our Series 2 family of processors into one single product. And this isn't marketing speak. We will show you how it's deeply architected into the SoC through our fabrics and with our 18A process node. We've used the power efficiency of Lunar Lake as the architectural foundation of Series 3 and our roadmap moving forward. The graphics performance was already impressive, but we wanted to expand the performance available across more devices. We've moved the GPU tile to its own chiplet.

Now we can attach larger or smaller chiplets based on the market segment needs and the device needs of our customers. We're continuing to push through our hybrid core strategy, redesigning cores for 18A so they operate at significantly lower voltage. It means greater per-core performance while increasing efficiency. We've improved the multi-threaded performance of Series 3 by adding up to eight E-cores. We completely redesigned our low-power island with dedicated low-power E-cores with their own cache that run hundreds of workloads that are optimized for maximum battery life. Examples like web browsing and video conferencing that demand power efficiency. The benefit? Better performance, much lower power, longer battery life. We've also consolidated all the I/Os onto a single chip. This was a big ask by our customers' design teams because they want to take Series 3 and expand its capabilities and its market coverage.

But people don't buy SoCs, and we know that. So beyond the SoC, we've deployed other technologies at the platform level focused on power efficiency. Intel's Intelligent Display Technology, context-aware charging, low-power video conferencing with our image processor, long-lasting connectivity with Intel's Wi-Fi solution. We bring this all together with our advanced Foveros packaging technology, delivering a full range of Series 3 processors. That architecture flexibility allows us to provide multiple configurations using a single package type up and down our customers' designs, something they've been asking for two generations. They have more memory options, and they now can use their own power delivery schemes for their product management and product capability. This gives them design flexibility up and down their product lineup, supporting them into the market and easing design efforts for their teams. So we just covered a lot on architecture.

Let's unpack what it can do, starting with the x86 core. We're proud of this accomplishment. Core Ultra Series 3 delivers 60% more performance than Lunar Lake Series 2. And it's faster and more efficient than our highest-performing SKU while running fewer P-cores. This maniacal focus our design teams have on power efficiency, and the implementation of the low-power island, dramatically improves a range of applications we use every day. A simple example is streaming 4K video: on Series 3, it now draws one-third the power of prior generations. A massive reduction that lets us talk about battery life in days, not hours. It once again puts to bed the myth that x86 can't be power efficient even at higher performance levels. And this is the last time I'm going to say that.
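To make the "days, not hours" framing concrete, here is a back-of-envelope sketch. The battery capacity and baseline wattage are my illustrative assumptions, not figures from the presentation; only the "one-third the power" ratio comes from the talk.

```python
# Illustrative arithmetic only: the battery size and prior-gen wattage are
# hypothetical assumptions, not Intel-provided figures.
battery_wh = 50.0               # assumed thin-and-light battery capacity
prev_gen_w = 9.0                # assumed 4K-streaming draw, prior generation
series3_w = prev_gen_w / 3      # the "one-third the power" claim

hours_prev = battery_wh / prev_gen_w
hours_s3 = battery_wh / series3_w

print(f"prior gen: {hours_prev:.1f} h, Series 3: {hours_s3:.1f} h")
```

At one-third the draw, playback time for this one workload triples; multi-day battery figures additionally depend on idle and standby behavior, which this sketch ignores.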

So let's switch gears and now talk about the GPU, arguably the biggest update to Series 3. And to walk you through it, let me welcome our GM of PC products, Dan Rogers, to the stage.

Daniel Rogers
VP and General Manager of PC Products, Intel

Thanks, Jim. Graphics at Intel has an interesting history: ubiquitously deployed with a massive install base, but previously lacking in key features, driver support, and frankly, performance. Five years ago, we set out on a mission to change that with the announcement of Intel Arc graphics. And this was more than a hardware plan. Arc is our brand's commitment to deliver world-class, discrete-class software across a range of graphics products, including integrated. We re-architected our entire software stack and strategy to include extensive game-day testing, day zero driver support, and a full modern feature set. Last year alone, our engineering teams engaged 300 developers on pre-release titles and supported 50 day zero driver releases. This is light years ahead of where we were just a few years ago. And you've seen the results with Lunar Lake and then Battlemage, one of the best-reviewed cards of 2025.

And so today, we are very proud to announce the next-generation integrated GPU we've been referring to as 12 Xe, but that the world will know as Intel Arc B390 graphics. It features 50% more graphics cores, twice the cache, and 96 built-in XMX AI accelerators for 120 GPU TOPS. If you were impressed by the graphics of Lunar Lake, Panther Lake has 70% more gaming performance and an incredible 50% more AI inference performance. And to prove it, we tested Panther Lake across a range of titles and APIs from DX9 to DX12. As you can see, the new Arc GPU not only delivers consistently smooth gameplay, but when compared to the latest from AMD Radeon with similar power and similar memory, delivers an incredible 70% higher frame rate on average and 2x faster performance on select titles. This is truly amazing.
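As a quick sanity check on those numbers, the quoted specs are internally consistent if you assume Lunar Lake's 8 Xe render cores as the baseline (that baseline is my assumption; it is not stated on stage):

```python
# Cross-checking the Arc B390 figures quoted above.
lunar_lake_cores = 8                 # assumed Lunar Lake Xe core baseline
b390_cores = lunar_lake_cores * 1.5  # "50% more graphics cores"
print(b390_cores)                    # → 12.0, matching the "12 Xe" working name

xmx_engines, gpu_tops = 96, 120
print(gpu_tops / xmx_engines)        # → 1.25, i.e. ~1.25 TOPS per XMX engine
```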

As an example, first-person shooters such as Painkiller and Delta Force will play at well over 100 frames per second at 1080p high settings with XeSS Super Resolution. To the developers and engineers at Intel and our partners across the industry, thank you. Through your hard work, millions of PC gamers will be able to experience great discrete-class gaming performance on a thin and light laptop for the first time in 2026. But as impressive as this performance is, in a way, it's only a look at today. This is traditional raster performance with AI upscaling. What's unique about our graphics is that we built it for the future, and we call this modern rendering. This means better lighting with ray tracing and other demanding visual settings, sharper visuals with high-quality textures and larger geometric assets, and the ultimate in smoothness through AI-generated frames.

Today, we're excited to confirm that the new Arc graphics in Panther Lake will be the first integrated graphics ever to ship day one with AI-based multi-frame generation, delivering three AI-generated frames for every one rendered frame. Thank you. It's a big moment for our graphics team. A great example of the benefits of these technologies is Battlefield 6, which launched late last year. Intel has been working closely with EA, the developers of Battlefield, to unleash the full performance of this title on Arc graphics. With the most demanding visuals offered in the game, called Overkill settings, Arc B390 can already drive smooth frame rates using super resolution. But as we turn on multi-frame generation in the driver, you can see that we scale to an incredibly smooth experience: over 120 frames per second and nearly three times faster than AMD.

To share more about this collaboration between EA and Intel, I'm very honored to invite Fellow and VP of Technology Partnerships at EA, Jeff Skelton, to the stage. Please welcome Jeff.

Jeff Skelton
VP and Technical Fellow of Technology Partnerships, EA

Good to see you.

Daniel Rogers
VP and General Manager of PC Products, Intel

Jeff, thank you for joining us today.

Jeff Skelton
VP and Technical Fellow of Technology Partnerships, EA

Pleasure to be here.

Daniel Rogers
VP and General Manager of PC Products, Intel

So we just walked through the incredible performance of Panther Lake's Arc graphics. We'd love to hear your perspective of this collaboration between EA and Intel.

Jeff Skelton
VP and Technical Fellow of Technology Partnerships, EA

Absolutely, so we set out to build the best Battlefield ever by reimagining this iconic franchise from the ground up, pushing the limits of large-scale multiplayer and tactical destruction experiences while redefining the FPS genre. We want all players to enjoy this amazing experience, and a partnership like this one, like the one we have with Intel, helps us achieve that as we optimize for frame rate, stutter reduction, stability, and more. This work ensures that we meet player expectations, whether they are running desktops with discrete GPUs or mobile processors like Panther Lake. You know, I've been making games for a very long time, and if you told me a few years ago, I'd be up here on stage with Intel talking about how our games run and how well they run on integrated graphics, I wouldn't have believed you. The performance we see on Panther Lake is remarkable.

Daniel Rogers
VP and General Manager of PC Products, Intel

Yeah, thank you, Jeff. This means a lot coming from you and the team at EA. Our engineers have been working with EA from the very beginning of Panther Lake, from the first A0 silicon, to ensure robust driver support at their beta as well as at launch. We had two key focus areas: one was optimizing for the hybrid CPU architecture that we've had since Alder Lake, and two, implementing the latest AI-based XeSS technologies into the title.

Jeff Skelton
VP and Technical Fellow of Technology Partnerships, EA

The addition of AI into the rendering pipeline has been a game changer. At this point, I don't think you'd want to build a game without technologies like XeSS.

Daniel Rogers
VP and General Manager of PC Products, Intel

Anything else coming, Jeff?

Jeff Skelton
VP and Technical Fellow of Technology Partnerships, EA

Absolutely. We're working on a native integration of XeSS 3 as we speak. Stay tuned.

Daniel Rogers
VP and General Manager of PC Products, Intel

Jeff, thank you so much. We're so excited to see what's next from EA.

Jeff Skelton
VP and Technical Fellow of Technology Partnerships, EA

Thank you so much.

Daniel Rogers
VP and General Manager of PC Products, Intel

Finally, with the performance of our graphics integrated into a low-power x86 SoC, plus the progress you've just seen in software from Intel, it's natural that we would think even beyond laptops. Today, we're excited to share with you that we'll be launching an entire handheld gaming platform with Panther Lake. We'll have more news to share on that from our hardware and software partners later this year. I hope you stay tuned. Thank you so much.

Jim Johnson
SVP and General Manager of Client Computing Group, Intel

All right, Dan. Good job. Thank you, brother. If you have any interest in mobile gaming, you've got to buy that PC. So now let's turn our attention to AI. To deliver new and better AI PC experiences, it takes a massive investment by our software partners. When we created the AI PC category back in 2023, we understood it was a partnership between our hardware and infrastructure and the software ecosystem that would ultimately deliver on the experiences. We're also aware that while AI workloads are moving to the edge, it's not going to happen overnight, and even today they're predominantly handled in the cloud. But that's changing, and we're going to show you some of this. We committed to and delivered the largest AI PC footprint in the industry. This sparked the investment of the software community, and that investment accelerated innovation.

It continues to accelerate innovation in very short cycle times. It's hard to believe that just two years ago, early inference was all about background blur and noise suppression, no dogs barking. Then we evolved to GenAI and advanced transformer models. It really started rewriting the personal in our personal computer, offering something different for all of us, augmenting and accelerating our daily work. And I have a personal experience I'd like to share. My son leads a large software team in smart energy services, and AI-enabled coding is completely overhauling how they develop products. Software engineering is not about typing syntax. It's about architecture and design. AI helps them automate the construction phase, freeing up massive amounts of engineering cycles to design better products and better systems for scalability, reliability, and importantly, observability.

Now with vibe coding, his team can spin up custom observability frameworks and analysis tools in days rather than months, if they ever even got around to it before. This is unlocking an amazing ability to instrument all aspects of his software products. The best analogy I can think of, though it's not perfect, is what CAD tools did for mechanical and civil engineers, eliminating the need for tedious drawing on drafting boards. That's the type of thing happening in the various industries we serve. We have invested significantly in software engineering and infrastructure. We support hundreds of leading ISVs with the tools they need to deliver new AI applications optimized for Intel's CPU, GPU, and NPU. For example, Adobe Premiere Pro is using the Arc GPU Dan just shared to search for media.

You simply describe what you're looking for, and it'll find the match, even if it's unlabeled or buried in footage, something you cannot do without this capability. Another example is Zoom. They have a virtual ring light, and their model runs on our low-power NPU, brightening our image while automatically dimming the background. We also support the AI models that people depend on running locally on the PC. Breadth, of course, is important, but so is quickly optimizing for updates and enabling new models as they become available. Very important. So for example, in the PRC last October, where some of the fastest innovation is taking place, we had day zero support for Alibaba's new Qwen 3 LLMs on Core Ultra, so developers could simply plug and play.

We ensure software runs in the richest set of frameworks and tools, and we support major industry deployment paths like llama.cpp and PyTorch. Intel's OpenVINO tools provide deep, production-ready optimizations across our engines, so developers can deploy state-of-the-art GenAI and vision models immediately with no custom tuning or hardware-specific rewrites. Working closely with Microsoft on Windows ML, we're the only silicon vendor supporting APIs on all three processors, CPU, GPU, and NPU, today. To ensure consistent performance, we've also integrated key open optimizations, such as primitives, deep within Windows ML. So thank you, Pavan and Team Microsoft. This would not be happening without you. We're also a time-to-market partner for Copilot+, which you will find across the Core Ultra Series 3 family as it ships. Thanks again, you guys. Couldn't do it without you.

All of this rests on a foundation of leadership silicon, and we've taken a major step forward with Series 3: up to 180 total platform TOPS, 120 on the GPU with XMX. When you combine OpenVINO with full access to 96 GB of memory, this SoC can handle a 70 billion parameter model with a 32K context. Larger context is extremely important for more complex and deeper use of LLMs locally. None of our competitors can do this. You complement this with the low-power NPU at 50 TOPS, always on, always inferencing for use cases like video conferencing security. When you put it all together and look at the stack, we're at the forefront of an industry redefining an intelligent PC stack, one that combines hardware, software, the OS, firmware, and the infrastructure of local intelligence for all client devices at the edge, including PCs.
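A rough feasibility check on "a 70B model with a 32K context in 96 GB": the claim is plausible under common local-inference assumptions. The quantization level and model layout below are my assumptions (a Llama-70B-like configuration), not details given in the talk.

```python
# Back-of-envelope memory check. Assumptions (mine, not Intel's):
# 4-bit weight quantization, fp16 KV cache, and a 70B layout of
# 80 layers with grouped-query attention (8 KV heads x 128 dim = 1024 wide).
params = 70e9
weights_gb = params * 0.5 / 1e9          # 4 bits ≈ 0.5 bytes/param → 35 GB

layers, kv_width, fp16_bytes = 80, 1024, 2
context = 32 * 1024
# K and V tensors per layer, each context x kv_width at fp16:
kv_gb = 2 * layers * context * kv_width * fp16_bytes / 1e9

total_gb = weights_gb + kv_gb
print(round(weights_gb, 1), round(kv_gb, 1), round(total_gb, 1))  # → 35.0 10.7 45.7
assert total_gb < 96  # fits, with headroom for activations and the OS
```

With a non-GQA attention layout the KV cache alone would be several times larger, which is why the quantization and attention assumptions matter.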

Like any platform effort, it's only as good as its scale. Let me attempt some math. We have shipped more than four ZettaOPS, or four billion TOPS, of inference compute to the edge in the market. What's that equivalent to? Forty data centers' worth of compute across the edge. Why does this matter, and how do we activate this compute more effectively? We have a couple of examples to share. One is our work with ByteDance. In 2025, they launched AI Clipper, a cloud-based feature in CapCut for short video summaries. They're expected to reach a million users this year, and it's straining their cloud capacity. ByteDance looked at this and seized the opportunity to leverage the massive TOPS in Intel AI PCs, pairing them with their cloud. The results have been amazing. Same quality, better performance, lower cloud cost.
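The units in that math check out. The per-data-center figure is inferred from the stated equivalence, not something quoted directly:

```python
# Units check for "four ZettaOPS, or four billion TOPS".
tops = 4e9                     # four billion TOPS
ops_per_sec = tops * 1e12      # 1 TOPS = 1e12 ops/s
assert ops_per_sec == 4e21     # zetta = 1e21, so this is 4 ZettaOPS

# The "40 data centers" equivalence implies ~1e20 ops/s (100 ExaOPS)
# per site; that per-site figure is inferred, not stated on stage.
per_site = 1e20
print(ops_per_sec / per_site)  # → 40.0
```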

This is a great first step toward where we think the industry is going to end up. In a world with growing constraints on data center infrastructure, more AI can, and we believe will, move to the edge. But it won't be exclusively client. It has to work seamlessly with the cloud. Series 3 represents the start of this hybrid AI era. Putting this into practice is a huge technical challenge. Our new Intel AI Super Builder platform is a breakthrough that helps client and cloud models work better together. Local AI executes the task securely, keeping your data on the machine, while cloud AI handles global reasoning, planning, and multi-agent orchestration. The communication between local and cloud enables greater security, greater privacy, better performance, and better cost.
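The local-versus-cloud split described here can be sketched as a simple routing decision. This is purely a hypothetical illustration of the idea; the function name, thresholds, and policy below are mine and are not Intel's actual AI Super Builder API.

```python
# Hypothetical sketch of the hybrid local/cloud split described above.
# All names and thresholds are illustrative assumptions.
def route_task(prompt_tokens: int, touches_private_data: bool,
               local_context_limit: int = 32_000) -> str:
    """Decide where a task should execute under a hybrid AI policy."""
    if touches_private_data:
        return "local"   # keep user data on the machine
    if prompt_tokens <= local_context_limit:
        return "local"   # fits the on-device model's context window
    return "cloud"       # global reasoning / multi-agent orchestration

print(route_task(4_000, touches_private_data=True))     # → local
print(route_task(500_000, touches_private_data=False))  # → cloud
```

The design point is that privacy is a hard constraint while capacity is a soft one: private work never leaves the machine, and the cloud is used only when a task exceeds what local hardware can hold.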

We're actively working with ecosystem partners to augment cloud-only solutions in a hybrid environment. One of the absolute leaders in this area is Perplexity. Let me welcome the CEO and co-founder of Perplexity, Aravind Srinivas, to the stage. Good to see you again. Thank you so much for coming to be with us.

Aravind Srinivas
CEO and Co-Founder, Perplexity

Thank you. Yeah.

Jim Johnson
SVP and General Manager of Client Computing Group, Intel

Hey, so you've been a champion for hybridized AI compute for quite some time, right out in front of it. And with the Comet browser that launched last summer, and I'm a big fan, I use it often, you put a lot of attention on moving as much compute as possible locally. Can you share a little bit about your vision and how to do this?

Aravind Srinivas
CEO and Co-Founder, Perplexity

Yeah. First of all, thank you for having me here, Jim. And I love watching all these talks about chips and all the technical details, because I really think deep tech is the real thought leadership in AI. So thanks for doing that. And yes, I've been talking about localized compute for quite some time. This past summer, we launched the AI-powered browser Comet. What got a lot of attention among people is the AI. But to me, what's really interesting about Comet is that from the get-go, we focused on keeping as much of the user's compute and data local as possible, because that way users can really trust the product and have privacy guarantees. So we believe localized compute will only get bigger and more important as AI advances further in capabilities. And there are four reasons for this.

Number one, it's very simple: performance. AI feels slow to a lot of people, and people are impatient. Think about all the times you've felt delays in computing before. That spinning beach ball is your data traveling thousands of miles to a server farm and back. Local compute removes that journey. When processing happens right where you are, there's no lag, and you're also not sending all your local files to a server farm. The more you localize, the more you decrease latency. So it's going to be faster and better.

Jim Johnson
SVP and General Manager of Client Computing Group, Intel

Yeah.

Aravind Srinivas
CEO and Co-Founder, Perplexity

Second, it's the privacy and security argument. Now, remember I said these get more important as AI advances further and further. With great power comes great responsibility, and privacy and security drove our original thinking behind hybridizing the compute and the Comet browser. We built an AI browser because you don't just browse on the internet anymore. That's how browsers started in the '90s. Today, you live on the internet. You work there.

Jim Johnson
SVP and General Manager of Client Computing Group, Intel

Amen.

Aravind Srinivas
CEO and Co-Founder, Perplexity

The internet is basically in every part of our lives, and nobody wants all of that uploaded to some frontier AI company's servers. Even though some AI companies want all of your data, that's not what users or enterprises want. The market demands local compute, and the more you localize, the more secure each user's most personal queries are. AI can begin to be more useful to you and start getting more personalized to you.

Jim Johnson
SVP and General Manager of Client Computing Group, Intel

Yeah.

Aravind Srinivas
CEO and Co-Founder, Perplexity

And you start connecting your local files to the machine. You're guaranteed security. You own your files. You own the data. And over time, you own the intelligence. It's your brain, and you own it.

Jim Johnson
SVP and General Manager of Client Computing Group, Intel

Nice.

Aravind Srinivas
CEO and Co-Founder, Perplexity

The third argument: simple economics. Unless you've been living under a rock through 2025, you've probably heard that AI is getting really expensive. It's different from software. Every query costs real money. That's inference cost. And in 2025, another hype topic was data centers. When you centralize inference in data centers, there are huge bottlenecks against the demand. Localized compute inverts that model entirely. Infrastructure costs will drop dramatically for everybody as bandwidth expenses plummet. So it's a simple economics argument. Finally, the fourth reason, especially for businesses: control. This is where it gets really serious. Four of the Mag 7 and 92% of the Fortune 500 entrust Perplexity with their data on our servers. That's a big responsibility, and as a founder, I really know how they feel. That's why we take data control pretty seriously at Perplexity.

And with local compute, whatever your competitive enterprise intelligence is should remain exclusively yours. That's the only way to do it, because it's right there on your machines. You own those machines. You control those machines. Thereby, you control the intelligence, its jurisdiction, its compliance, everything. So that's it. Four arguments: performance, security, economics, control. These make local compute such an obvious thing to work on. Every business is going to care about this. Every user is going to care about this. And they always will. So we'll be launching Comet for enterprise next month based on these exact principles. And the entire industry, as Jim and Intel know, is going to move more and more toward local compute. So thanks a lot for having me here.

Jim Johnson
SVP and General Manager of Client Computing Group, Intel

You know we share your vision. Thank you so much for your leadership.

Aravind Srinivas
CEO and Co-Founder, Perplexity

Thank you.

Jim Johnson
SVP and General Manager of Client Computing Group, Intel

Our team looks forward to doing the hard work to make this true.

Aravind Srinivas
CEO and Co-Founder, Perplexity

Thank you.

Jim Johnson
SVP and General Manager of Client Computing Group, Intel

Thank you so much for coming.

Aravind Srinivas
CEO and Co-Founder, Perplexity

Thank you.

Jim Johnson
SVP and General Manager of Client Computing Group, Intel

Thanks, Aravind. We spent a lot of time talking about how AI is changing the landscape of the PC. It's also a driver for the edge, a key pillar of our client business living between the PC and the cloud. Agentic and physical AI show up as Vision Language Models, or VLMs. Think of them as close cousins of LLMs. For example, in industrial environments where automation is used for quality control, we're now seeing VLMs being deployed that are 140x bigger than the ones deployed just a couple of years ago. And we're taking the same approach on the edge as we do with the AI PC: establish a large footprint of leadership hardware that activates investments in applications by our software partners. Together, we infuse AI and power-efficient x86 right where the compute happens.

We're bringing Series 3 to the edge with all of its latest IPs and technologies, and because of the growing demand for AI at the edge, we're now accelerating the launch of 18A products for this market to align directly with the PC launch, so the edge gets the newest hardware at the same time. We design, test, and validate for high and low temperature extremes, 24/7 reliability, and extended life support, because these are additional table stakes for our critical edge customers. You will see edge devices in hundreds of form factors across key segments like smart cities, factories, healthcare, and all sorts of automation systems. The demand is huge and growing, and Series 3 was designed for this task. Just look at this data: nearly two times better LLM latency performance and over 2x better TCO for video analytics.

These SoCs are great at perceiving the environment and then using the built-in processors, all on chip, to control the machinery. There's no need to offload AI to a discrete card. We're also seeing a lot of interest in robotics: stationary control arms, autonomous mobile robots, and even humanoids. And if you're here at the show, you'll be able to see one right back here, running Series 3. To facilitate these emerging use cases, we have worked on a reference board and a dev kit with edge ODMs that includes Intel's robotics suite, the tools, the frameworks, and the applications, all available at the launch of Intel hardware. So let's wrap up with what's next. Our partners have over 200 designs. There will be a Series 3 option for everyone. Gamers who want the best performance on the go will want this processor.

vPro for business, robotics, and industrial edge devices. This will be the most broadly adopted and globally available AI PC platform Intel has ever shipped. You can tell we're excited. You can tell I'm excited about Core Ultra Series 3. Instead of recapping all of this, the numbers speak for themselves: longer battery life, faster graphics, higher performance, more AI. And you don't have to wait. Core Ultra Series 3 is ramping now, with the first consumer designs available to order starting tomorrow and more designs rolling out all year. We've covered a lot of ground today, and I think we should take away one thing: we are at the strategic inflection point for AI computing in client together. Our job: build leading process and SoCs. Our customers are delivering systems that will surprise the market.

Our software partners, as you see, are investing in these platforms for scale. For those of you in person, hang tight. I have another. Do I have it up here? Oh, I don't have it up here. There's some really cool demos in the back. Please, when we're done, walk back there and see them. Let's have a great 2026. Thanks, everyone, for joining us.
