Intel Corporation (INTC)

Intel Innovation 2023

Sep 19, 2023

Matt Ramsey
Analyst, TD Cowen

Please welcome, Pat Gelsinger.

Pat Gelsinger
CEO, Intel

Hey, so excited to be back. The developer community and the developer conference, and, you know, cool to get together and talk about new products and to share a vision of the future together. I'm excited to unlock these massive opportunities created by the generational shift in AI. Industries are racing toward it. You know, we're bringing AI and making it accessible at scale, but also at the client and edge as well. You know, we have exciting achievements to share with you today. You know, advancements in Moore's Law, the underpinning of everything we do, and, based on the choice and trust of our open ecosystems, together we can do almost anything and enable this continuous innovation.

As the intro video was making a little bit of fun of me, we've got a lot to cover today, so let's dive in immediately with a demo of what that video was all about. To help me with this, please join me in welcoming Rich Felton, the Director of Sports Science and COO at ai.io. Rich, come on up here.

Richard Felton-Thomas
COO and Director of Sports Science, ai.io

Hey, Pat. Thanks for having me.

Pat Gelsinger
CEO, Intel

So, you know, Rich, you know, the CEO gig ain't gonna go on forever, so I am, you know, looking at what's the next career opportunities here. So, you know.

Richard Felton-Thomas
COO and Director of Sports Science, ai.io

We're definitely going to take a look.

Pat Gelsinger
CEO, Intel

Okay. Okay.

Richard Felton-Thomas
COO and Director of Sports Science, ai.io

But what we do, for the audience, is we democratize sport and specifically give players or anyone in the world, even yourself, the chance to get access to opportunities to trial for professional sports teams, scholarships, et cetera. And we do that all through a mobile phone using AI and computer vision. We've been working with Intel since 2021, and what we've built together is a platform kind of end-to-end, that allows these players to download an app from anywhere in the world, do their drills-

Pat Gelsinger
CEO, Intel

Anywhere? Athletes anywhere.

Richard Felton-Thomas
COO and Director of Sports Science, ai.io

Anywhere in the world.

Pat Gelsinger
CEO, Intel

Okay.

Richard Felton-Thomas
COO and Director of Sports Science, ai.io

Anywhere in the world, do their drills. You can see some videos up there, actually, that show that. We've had some great success stories. We've had players, even this year, sign for Premier League clubs, play in the Premier League, all the way through to players in India, who actually downloaded our app from a shared community phone. So in a remote village, download the app, not-

Pat Gelsinger
CEO, Intel

They might never have had the opportunity.

Richard Felton-Thomas
COO and Director of Sports Science, ai.io

Never had the opportunity.

Pat Gelsinger
CEO, Intel

Wow!

Richard Felton-Thomas
COO and Director of Sports Science, ai.io

Fast-forward, they're now on five-year, fully paid residency programs for all their education, all their football, and that happens through that power of AI through our mobile phone.

What it means for the clubs and the organizations is we've just massively expanded their reach globally so they can reach more people, but there's a sustainability story to that as well. So now they get that data up front. It's all analyzed data. So with that data up front, it means they can be more targeted with where they travel to and not waste carbon emissions when they don't need to.

Bring it to their doorstep.

Pat Gelsinger
CEO, Intel

Yeah, and using our technologies end-to-end, as the slide shows, this is really impressive, Rich.

Richard Felton-Thomas
COO and Director of Sports Science, ai.io

Yeah, and to the point of the technologies, the critical thing for both player and club is speed. So we use cloud-based services on the AWS cloud. We've got... You know, there's lots of wins around that, around cost optimizing those AI workloads, which can be really expensive. It's video, it's tracking, it's data. But for the clubs, they're autonomously finding players through complex data sets. So for those clubs, we've now had situations where the likes of Chelsea, good team.

Pat Gelsinger
CEO, Intel

Eh, not bad. Not bad.

Richard Felton-Thomas
COO and Director of Sports Science, ai.io

From the point that data, or what you see on the screen with the tracking, flags a player as good enough, they sign players within two weeks. That's normally about an 18-month process.

Pat Gelsinger
CEO, Intel

Wow! Wow.

Richard Felton-Thomas
COO and Director of Sports Science, ai.io

Huge cost saving for doing that.

Pat Gelsinger
CEO, Intel

Oh, that's incredible. So-

Richard Felton-Thomas
COO and Director of Sports Science, ai.io

Yeah, for players, I think it's... Or anyone here actually, if you want to turn a passion into a profession, download the app, and you can do this.

Pat Gelsinger
CEO, Intel

You know, I wanted to get a sense for how well am I doing for my second career?

Richard Felton-Thomas
COO and Director of Sports Science, ai.io

Yeah.

Pat Gelsinger
CEO, Intel

Right? You know, I went through these tests, as you saw in the opening video. So what's it look like, Rich? Can you show me?

Richard Felton-Thomas
COO and Director of Sports Science, ai.io

Should we take a look? Should we look at your-

Pat Gelsinger
CEO, Intel

Sure. Let's take a look here.

Richard Felton-Thomas
COO and Director of Sports Science, ai.io

Let's go over to the computer. So obviously, you downloaded the app outside Intel HQ there. We gave you a bunch of drills to do. Film those.

Everything gets tracked, all the data comes back. It's all benchmarked in relation to something. So we've benchmarked you in relation to Major League Soccer. So that's what we're going to look at for yourself. So if we click on Pat here. Oh, there we go.

Pat Gelsinger
CEO, Intel

There we go.

Richard Felton-Thomas
COO and Director of Sports Science, ai.io

This report now, for everyone here, unless you're a sports scientist or a coach, these metrics probably don't mean much to you.

Pat Gelsinger
CEO, Intel

Yeah, I don't get it.

Richard Felton-Thomas
COO and Director of Sports Science, ai.io

They're just numbers. Yeah, we can click on here, and we can look at what it was, but what we actually do with our benchmarking, and part of making things easier for the player and for the coach and the scout, to make this really simple, I'll just turn the benchmarks off for a moment. So we turn everything to a simplified one-

Pat Gelsinger
CEO, Intel

These are my scores here.

Richard Felton-Thomas
COO and Director of Sports Science, ai.io

These are your scores that we've-

Pat Gelsinger
CEO, Intel

Okay. Okay.

Richard Felton-Thomas
COO and Director of Sports Science, ai.io

We've made into a uniform 1-10.

Pat Gelsinger
CEO, Intel

Mm-hmm.

Richard Felton-Thomas
COO and Director of Sports Science, ai.io

This is easier for everyone to consume. A player can see, if I'm a ten on something, I know I'm brilliant.

Pat Gelsinger
CEO, Intel

Mm.

Richard Felton-Thomas
COO and Director of Sports Science, ai.io

Same for the scout. If I'm a ten, I'm interested. So the blue here is your physical and technical scores with the ball and running around.
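
The scoring Rich describes amounts to mapping each raw drill metric onto a uniform 1-10 scale relative to a league baseline. A minimal sketch of that idea, with made-up metric names and hypothetical MLS ranges rather than anything from ai.io's actual model, might look like this:

```python
# Toy benchmarking sketch: scale raw drill metrics onto a uniform 1-10 score
# relative to a league's observed range. Metric names and ranges are invented.
def to_scale(value, league_min, league_max):
    """Map a raw metric onto a 1-10 score relative to the league range, clamped."""
    score = 1 + 9 * (value - league_min) / (league_max - league_min)
    return round(min(max(score, 1.0), 10.0), 1)

mls_benchmarks = {  # hypothetical (min, max) ranges, higher is better
    "sprint_speed_kmh": (24.0, 34.0),
    "passing_accuracy": (0.60, 0.95),
    "shots_on_target_pct": (0.20, 0.60),
}

player = {"sprint_speed_kmh": 26.5, "passing_accuracy": 0.71, "shots_on_target_pct": 0.30}

for metric, (lo, hi) in mls_benchmarks.items():
    print(metric, to_scale(player[metric], lo, hi))
```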

Pat Gelsinger
CEO, Intel

Okay. Okay.

Richard Felton-Thomas
COO and Director of Sports Science, ai.io

The orange-

Pat Gelsinger
CEO, Intel

I grew up playing soccer, so this is-

Richard Felton-Thomas
COO and Director of Sports Science, ai.io

Well, well, the orange scores are your cognitive scores.

Pat Gelsinger
CEO, Intel

Oh.

Richard Felton-Thomas
COO and Director of Sports Science, ai.io

Thankfully, cognitive scores are off the charts. What I can actually do, if I show you the Major League Soccer benchmark for your position, I think we've got you in for... as an attacker or a striker. These gray lines are the average for Major League Soccer.

Pat Gelsinger
CEO, Intel

It doesn't look good.

Richard Felton-Thomas
COO and Director of Sports Science, ai.io

Well, you can see physical and technical, maybe as expected.

Pat Gelsinger
CEO, Intel

Okay.

Richard Felton-Thomas
COO and Director of Sports Science, ai.io

You're not at Messi level.

Pat Gelsinger
CEO, Intel

Okay. Okay.

Richard Felton-Thomas
COO and Director of Sports Science, ai.io

All right? But cognitive, again, unbelievable.

Pat Gelsinger
CEO, Intel

Okay.

Richard Felton-Thomas
COO and Director of Sports Science, ai.io

Through the roof.

Pat Gelsinger
CEO, Intel

Maybe I'm in the right job.

Richard Felton-Thomas
COO and Director of Sports Science, ai.io

Yeah, yeah, maybe so.

Pat Gelsinger
CEO, Intel

Okay.

Richard Felton-Thomas
COO and Director of Sports Science, ai.io

Maybe so.

Pat Gelsinger
CEO, Intel

Okay.

Richard Felton-Thomas
COO and Director of Sports Science, ai.io

I think... Okay, so possibly not gonna-

Pat Gelsinger
CEO, Intel

My hopes are dashed.

Richard Felton-Thomas
COO and Director of Sports Science, ai.io

Yeah.

Pat Gelsinger
CEO, Intel

Yeah.

Richard Felton-Thomas
COO and Director of Sports Science, ai.io

Possibly not gonna have that career tomorrow-

Pat Gelsinger
CEO, Intel

Okay

Richard Felton-Thomas
COO and Director of Sports Science, ai.io

... but those scores, those cognitive scores were flagged as interesting. It shows great technical acumen. So one of the teams, the local team, still wanted to give you something-

Pat Gelsinger
CEO, Intel

Oh, right.

Richard Felton-Thomas
COO and Director of Sports Science, ai.io

to show that you put the effort in.

Pat Gelsinger
CEO, Intel

Thank you to the Earthquakes locally.

Richard Felton-Thomas
COO and Director of Sports Science, ai.io

Thank you to the San Jose Earthquakes. Exactly.

Pat Gelsinger
CEO, Intel

Hey, so what do you think, everybody? Is this pretty impressive? Hey, yep. You know, with Intel as the collaborator, right, you're using our technologies everywhere. We're excited by the way that you are democratizing sport.

Richard Felton-Thomas
COO and Director of Sports Science, ai.io

We're excited with you. We're going to other verticals. There's lots of things we can do across other industries as well with tracking... Think about our mobile phone, health industries.

Pat Gelsinger
CEO, Intel

Mm-hmm.

Richard Felton-Thomas
COO and Director of Sports Science, ai.io

There's lots of areas we can go into. Anyone in the audience that's interested, developers, you know, QR code there to scan. You can learn more about our story with Intel, how we work together, and actually what we're doing. If anyone's interested, reach out to us, and let's collaborate more with everybody else.

Pat Gelsinger
CEO, Intel

Excellent. Thank you so much, Rich.

Richard Felton-Thomas
COO and Director of Sports Science, ai.io

Thanks for having me. Appreciate it.

Pat Gelsinger
CEO, Intel

Okay, soccer CEO, I guess we got the answer. But, you know, this idea today of all the things that silicon is touching: we literally now have a $547 billion silicon industry, but even more impressive is that it's powering this global technology economy that's now measured at $8 trillion. And I would ask you, what aspect of your life is not getting more digital? Everything. Sports, as we just saw, but our entertainment, our social experience, our health, right? Work, you know, medical, you know, everything is becoming more digital. It's a foundational aspect of all economy and human experience. And now, you know, with the superpower of AI, we're seeing this increasing autonomy and agency, you know, machines acting on our behalf.

We're creating this foundation for the next generation of opportunities and experiences that enable a better future for every person on Earth. Right? With that, I welcome you to the Siliconomy. As these advances in semiconductors enable new levels of human achievement, the need for compute and capabilities is exponentially increasing. You know, Moore's Law, in a nutshell: as we're increasing transistors, compute, and capabilities, we're decreasing, at an exponential rate, size, cost, and power. This is the magic of silicon. You know, all of this from the most plentiful material on Earth; God has given us this unique gift called silicon. Today we're seeing that just infiltrate organizations, where over the last several years, we've seen this 4X increase in managed devices. That's expected to triple again for more than 15X growth.

We're just seeing everything become a computer, more plentiful, powerful, affordable processing. And these computers are now becoming part of your thermostat, you know, your picture frames. Everything is becoming smart, and AI is representing a generational shift in how computing is used and giving rise to the Siliconomy. But inside of that, a simple rule: developers rule. You run the global economy, right? Not politicians, you know, not CEOs. Developers are the ones running this global economy, and you're powered by Moore's Law, and it's your creative passion, this insatiable drive to innovate, combined with Moore's Law, the fuel, and those coming together enable the Siliconomy. Every one of us is part of it, where we're seeing this evolving economy enabled by this magic. We're replacing, you know, industries like oil that defined geopolitics for five decades.

It's now silicon, the Siliconomy, and the technology supply chains that it enables. You know, for developers, this is a massive social and business opportunity to push the boundaries, you know, creating the solutions that will change the world and enable the improvement of every soul on Earth. And it requires a range of different capabilities: next generation CPUs, NPUs, GPUs, chiplets, new interconnects, specialized accelerators. And our commitment to you is to give you the coolest hardware and software ASAP. And we will do that, and we'll do that with the Intel Developer Cloud. And just to show it off, here's my Python code of the day, and it's syntactically and semantically correct, right? So if you want to join the Siliconomy, right, and you want open, trusted, secure capabilities that are the latest, coolest stuff, you go to the Intel DevCloud.

Last year at this event, we announced the beta of the Intel Developer Cloud. You know, the path to the latest-gen architectures for test and evaluation, to give you, the developer... to put you in the driver's seat. Today, we're thrilled to announce the general availability of the Intel Developer Cloud. You know, from the big stuff, you know, Gaudi 2 for large language model training, you know, to broad deployment with fourth-gen Xeons, to our Xeon Max series for bandwidth and HPC workloads, to running small, you know, systems as well. The latest, coolest stuff available on the Intel Developer Cloud, enabling, you know, large-scale and small-scale training and inference solutions available to all.

And with that, you know, we're, you know, enabling this next generation of capabilities along with our software platforms, our oneAPI toolkit, you know, our OpenVINO toolkit, many of our developer tools. We have three different tiers of service. You know, a freemium open service, a commercial premium, and then finally, an enterprise-grade service. Also, as I said, you know, it isn't just what's available today, it's also what's available tomorrow. So we're putting pre-production hardware in place, beta, so that we enable you to take advantage of this sooner and earlier in your development cycle, so that by the time the hardware is available in volume, hey, you've already been working on it for months, quarters, and even years. And it includes modern interfaces and workflows to help you optimize your end-to-end solutions. Easy to use, easy to install.
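
To make that concrete, here is a minimal OpenVINO inference sketch of the kind of workflow these toolkits support; the model file, input shape, and device choice are placeholders rather than anything shown at the event:

```python
# Minimal OpenVINO inference sketch (illustrative; "model.xml" and the input
# shape are placeholders for whatever model you have exported to IR format).
import numpy as np
from openvino.runtime import Core

core = Core()
print(core.available_devices)  # e.g. ['CPU', 'GPU', 'NPU'], depending on the machine

model = core.read_model("model.xml")
compiled = core.compile_model(model, device_name="CPU")  # pick any listed device

input_tensor = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = compiled([input_tensor])[compiled.output(0)]
print(result.shape)
```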

This is an easy path to access and use Intel-optimized AI software, hardware, and platforms, you know, delivering everything from small to large, enabling amazing workloads, but doing it from your PC and having easy access to this cloud. So it lowers your cost and friction to be able to do it. Just because you're my best friends and you're here today, you know, we're making... Wasn't that fun? You're my best friends. You're not my best friends. Oh, I'm so sad. Okay. But just because you're our best friends and you're here today, you know, we're giving you free access, a week of free access to the Intel Developer Cloud. This is part of the package that you got coming in, and I hope you all find a friction-free, expedited way to get to the cloud. Now...

Any time we're together, let's have a little bit of fun. So we decided, as part of our conference today, to have a little bit of a Shark Tank-like event. And we've been working with what we call the Intel Ignite community, companies that take advantage of our technologies and platforms, and we enable and unleash their passions to leverage Intel and really turn them on in new and powerful ways. And you're going to get to see on the show floor a number of these different companies that, you know, we're working with in this way. But we narrowed it down to three of them, and they represent artificial intelligence, next generation systems and platforms, and edge to cloud to space. And we're going to share their ideas throughout the keynote this morning and on the show floor.

At the end of the morning, we're going to take these three, and we're going to pick our first winner. What I'd like to do now is give you the first peek at one of these three, and that is Deep Render. You know, the co-founders are clearly geeks of geekdom, and they identified a specific video rendering problem with bandwidth and size that they set out to solve using breakthroughs in AI. Let's see the Deep Render update.

Arnaud Lambert
Director of Product Management, Starkey Labs

The entire modern internet is threatened.

Speaker 28

There's far too much data and far too little bandwidth. The solution? Deep Render's AI-based compression. As humans, we're just moving from text-based communication to video-based communication.

Speaker 25

But with more advanced classes of video, the data volume goes up by 10x-100x.

Speaker 28

That often manifests itself as lower resolutions, lower frame rates, more outages.

Speaker 25

With Deep Render's AI-based compression technology, we can fix this problem.

Speaker 28

Chris and I both studied at Imperial College. We did a master's in computer science.

Speaker 25

We connected over the homework. Very, very challenging homework. We worked together on a project where we had to deliver large amounts of video, and it just didn't go very well, and this is how we stumbled into the fascinating world of compression technology. Deep Render is proposing the next step change by moving to an AI-only compression pipeline.

Speaker 28

Compression is all about exploiting redundancies. The way we exploit redundancies is far more fluid and far more fine-grained, and we're able to basically locate every single pixel and say where it moved and we're software-based. So you can basically update it whenever you want.
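
As a toy illustration of that motion-redundancy idea, and explicitly not Deep Render's method, the sketch below uses OpenCV dense optical flow to estimate where each pixel moved between two frames and rebuilds the second frame from the first, leaving only a small residual to encode; the file names are placeholders:

```python
# Toy motion-compensation sketch (not Deep Render's pipeline): estimate per-pixel
# motion with dense optical flow, warp the previous frame into a prediction of the
# current one, and measure the residual that an encoder would still need to store.
import cv2
import numpy as np

prev = cv2.cvtColor(cv2.imread("frame0.png"), cv2.COLOR_BGR2GRAY)
curr = cv2.cvtColor(cv2.imread("frame1.png"), cv2.COLOR_BGR2GRAY)

# Flow from the current frame back to the previous one, so prev can be backward-warped.
flow = cv2.calcOpticalFlowFarneback(curr, prev, None, 0.5, 3, 15, 3, 5, 1.2, 0)

h, w = prev.shape
grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
map_x = (grid_x + flow[..., 0]).astype(np.float32)
map_y = (grid_y + flow[..., 1]).astype(np.float32)
predicted = cv2.remap(prev, map_x, map_y, cv2.INTER_LINEAR)

residual = curr.astype(np.int16) - predicted.astype(np.int16)
print("mean absolute residual:", np.abs(residual).mean())
```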

Speaker 25

It's very, very important to keep in mind how revolutionary the change to AI-based compression is. Deep Render is building up compression technology around a much more modern and solid framework of AI.

Speaker 28

We're at 5x smaller file sizes now, and Chris and I both believe the limit is quite a lot higher, up to 50x.

Speaker 25

Because it's such an incredible, important technology, someone needs to take leadership on it. Deep Render have to step up and take the mantle.

Pat Gelsinger
CEO, Intel

What do you think? Candidate number one. Okay, we got two more. Great, and then we'll see which one's our first winner this year. You know, this field of AI is just, you know, incredible, but it's also old. It's 50 years old, since the beginnings of AI. And the first 40 years, as I like to say, how much happened in AI? Nothing. Multiple winters, nothing happened. In fact, when I was architecting the 486, one of the things we said is: We're going to make the 486 a great AI chip. That was in the 1980s. What happened? Nothing. Right? Then, massive quantities of data. Right? You know, breakthroughs in computer science and algorithms and big enough compute, and all of a sudden, the last 10 years of AI have been just incredible, and it's been redefining itself with new algorithms and new insights in emerging computer science.

And with that, you know, we are participating, and our Intel Gaudi processor has been demonstrating, you know, performance capabilities rivaling, you know, the market leader. Even the largest, most challenging AI and generative AI problems are being delivered and executed on our Gaudi processor. We're not only rivaling the market leader, you know, but at much better TCO. Today, I'm excited to announce we've secured another major partnership on our journey in this area, and that's with Stability AI, one of the algorithmic and market leaders in this space. With them, we're excited to build the largest AI supercomputer in Europe, we believe, you know, running entirely on Xeons and 4,000 Intel Gaudi 2 accelerators.

Stability AI is an anchor customer, you know, and it's just an enormous build-out that we're undertaking with them to be one of the top 15 AI supercomputers in the world. We're quite excited about the partnership. But we also have key OEM relationships as well. You know, partners like Dell are now partnering with us to deliver Gaudi for cloud customers, for enterprise customers, and really fill out that AI portfolio, beginning with Xeon for general-purpose CPUs to Gaudi AI accelerators, you know, for the largest inference and training environments. This is a complete AI continuum of capabilities, enabling applications that are AI-enabled, you know, through on-prem inferencing and complete high-end training systems. With Dell, the entire range of solutions.

Coming back from Meteor Lake to machine learning inferencing, you know, challenges and some of the performance results we're now demonstrating with Xeon. Xeon comes with AMX, the AI enhancements, as part of the architecture, and we're now producing some of the best results, no, the best results on a CPU for AI workloads, and demonstrating workloads like GPT-J. You know, being able to do 100-word summarizations of news articles, you know, based on 1,000 to 1,500 words of source content, and being able to summarize those offline at about two paragraphs per second and online, one paragraph. So being able to bring, you know, real-time capabilities on standard Xeon platforms.
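
For a sense of what a CPU summarization setup like this can look like in practice, here is a minimal sketch using Hugging Face Transformers with Intel Extension for PyTorch so that bf16 math can be dispatched to AMX on 4th Gen Xeon; the model ID, prompt, and generation settings are illustrative, not Intel's benchmark code:

```python
# Minimal CPU summarization sketch with bf16/AMX via Intel Extension for PyTorch.
# Model choice, prompt, and settings are placeholders for illustration only.
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/gpt-j-6b"  # assumption: any causal LM works the same way
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
model.eval()

# ipex.optimize fuses operators and prepares weights for bf16 kernels (AMX where available).
model = ipex.optimize(model, dtype=torch.bfloat16)

article = "..."  # roughly 1,000 to 1,500 words of source text
prompt = f"Summarize the following article in about 100 words:\n\n{article}\n\nSummary:"
inputs = tok(prompt, return_tensors="pt")

with torch.no_grad(), torch.autocast("cpu", dtype=torch.bfloat16):
    out = model.generate(**inputs, max_new_tokens=128)

print(tok.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```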

We've also submitted, you know, the first MLPerf results with Xeon Max series, up to 64 GB of high-bandwidth memory on the CPU package as well, and being able to achieve 99.9% accuracies on GPT-J. All of these just showing the momentum our CPU and accelerator line is having in the marketplace. So let's hear from another one of our partners on this journey. Let's hear from Alibaba, and how they're using Xeon for their AI workloads.

Speaker 27

Thank you, Pat. It has been an incredible journey working together as we navigate this new era of AI. As the largest cloud provider in China, we're looking for reliable compute solutions that can support and facilitate a variety of AI workloads, including large language models and generative AI applications. Keeping performance high and costs down is critical for our joint success. As an early adopter of Intel fourth-generation Xeon CPUs, we have seen impressive performance for real-time inference over its predecessors, thanks to Intel's built-in AMX accelerator and software optimizations. Our generative AI and large language models, including Alibaba Cloud's Tongyi foundation models, have benefited greatly from those optimizations and achieved an average of 3 times inferencing acceleration in response time on CPUs. Today, we are serving our large language model by desktop in our production environment, using Intel's hardware and software solutions.

Looking ahead, we are very excited to adopt the Xeon Max CPUs with high-bandwidth memory, which offer an additional 2x performance improvement over traditional DRAM for serving large language models. We look forward to unleashing the power of Intel's technical advancements for superior performance and higher efficiency across our AI workloads. We are committed to keep innovating together with Intel and push the boundaries of generative AI and cloud computing to the next level. Finally, we wish this year's Intel Innovation a great success. Thank you. Back to you, Pat.

Pat Gelsinger
CEO, Intel

Thank you, Joe. And we're really excited about partnerships like Stability AI, with Dell Computers and Alibaba, and, you know, we're just working to bring AI capabilities into every platform, every product that we build, from our highest end, right, you know, all the way down to our client offerings. And at the high end, you know, Gaudi 2, right, delivering impressive results today. Xeon, impressive results today, but the roadmap is strong. In fact, we've already gotten Gaudi 3 silicon just out of fab and now in packaging, and this will be followed in 2025 with our Falcon Shores product. So 2023, Gaudi 2; 2024, Gaudi 3; 2025, Falcon Shores, where we bring together Gaudi with our GPU capabilities into a single platform. Simply put, our roadmap is extremely robust, and we are executing aggressively to bring this together.

And of course, that execution is based on Moore's Law. And as Gordon said of his eponymous law, you know, nothing can go on forever. "No physical quantity can change exponentially forever, but it can be delayed." And he was often amazed by just the creativity of how we continue to find workarounds to barriers. And we at Intel, we see ourselves as the stewards of Moore's Law and this relentless pursuit of computing and efficiency at scale. And we will not rest. We are committed to continuing, you know, this pursuit. And as I like to say, until every element of the periodic table is exhausted, we ain't done. And we're committed to Moore's Law so that you can develop with confidence on our platforms. So let's take a look at our second Ignite nomination that we have this year.

This one was just a little bit mind-blowing for me, right? Seeing, you know, the unlocking of the power of wet science by using computational biology and bringing those together. It's a revolutionary young startup, two longtime best friends, delivering radical breakthroughs in the cost, time, and numbers associated with it. You know, and just the potential to fundamentally disrupt the medical, pharma, health, food, and environmental industries. So let's hear from Scala right now.

Speaker 28

Imagine a world where therapeutics and vaccine development become much easier and simpler. Imagine a world where producing food will be much more sustainable and economical.

What if I told you that we can use proteins to achieve all of this? This is the world I want to live in with Scala Biodesign.

Proteins are the engines of all organisms, from bacteria, plants, to humans. They make all the amazing phenomena of what we call life.

The problem is, the proteins are natural. They are not optimized for industrial use.

For companies to engineer a single protein can take more than a year. It costs millions of dollars.

It's a very complex process. The number of combinations is extraordinary, 20 to the power of 300.

The most painful part is it just very often fails.

So now imagine that you can make a few tweaks into these proteins and then make them do exactly what you need.

We want to engineer the proteins at a fraction of the time and cost, and to make these protein engineering projects just succeed.

My co-founder, Ravit, is first of all, a foodie, and this is something that made us good friends.

Adi is one of the most idealistic people that I know. We had a very long journey together from a master project that Adi and I did together that was actually a complete failure.

By the way, it failed miserably because of unstable proteins that, back then, there was no one to engineer and improve.

This is why we decided to stay for a PhD, understand the basic physical properties of these proteins and how to engineer them better.

John Pitzer
Corporate Vice President and Investor Relations, Intel

Scala's mission is to empower everyone who wants to develop protein-centered solutions to do so.

Speaker 28

Scala is moving protein engineering from the lab to the computer. We demonstrated time and again that our methods succeed in one shot and really take these proteins to the next level.

Pat Gelsinger
CEO, Intel

What do you think, team two, Scala? Woo-hoo! You know, AI is fundamentally restructuring science and, you know, so many domains, you know, unleashing new applications, new experiences, and productivity and creativity. You know, but we also believe it ushers in the next era of the PC, the AI PC generation. And as we discussed, you know, we're participating, we're contributing, we're fueling this high end with Gaudi and Xeon and Max, and getting the biggest machines on Earth for training and refinement. And, you know, my friends, Sam Altman and, you know, Kevin Scott and Jensen, and, you know, all of them, the work they're doing at the high end is really cool. And as we, you know, quickly move to trillion-parameter models and beyond, this is amazing and breaking new science.

But I'll tell you what's even cooler, is if we put it in the hands of every human on Earth. You know, making it useful with nimble models that can run on your PC, that enable these models to be literally everywhere, offering personal, private, secure AI capabilities that infuse every aspect of our daily lives at work, at play, at sport, with gamers and our personal assistants, creators, church groups, Zoom calls, fantasy football. I might even be able to beat my kids with that. And these trained models being able to run on any PC, analyzing incoming data in my personal environments as well, prioritizing me, assisting me, and urgently tailored to my experience. You know, when we think about the AI PC generation, I'd like you to hearken back to the original Wi-Fi specs, and I was involved in helping to create Wi-Fi.

You know, we emerged in 1997, and we sort of went through like 5, 6 fallow years. And then in 2003, Intel launched the first-gen Centrino platforms, and it started to drive Wi-Fi at scale, and it gave rise to access points and hardware and comms and new applications, and it initiated a virtuous application cycle at the time. We believe the same occurs with AI, where we infuse it into everything with open and secure environments. We see the AI PC as a sea-change moment in tech innovation. You know, Andy Grove called the PC the ultimate Darwinian device, you know, just recreating itself. We've always gone through this question of: What's the killer app? And my simple answer to that is, you. Right? You're gonna be the ones creating these next-generation applications and use cases.

With each passing day, we're seeing more apps open up and, you know, an idea sparking another, and this AI PC performance and capability is the perfect experience at your fingertips. With that, let's take a look at how my AI PC becomes my new superpower. Let's invite Craig on stage to show us a couple of these examples. So, Craig.

Craig Raymond
Principal Engineer, Intel

Howdy, Pat.

Pat Gelsinger
CEO, Intel

You're looking rather dapper today, Craig.

Craig Raymond
Principal Engineer, Intel

Well, thank you. I was about to say the same thing.

Pat Gelsinger
CEO, Intel

Okay.

Craig Raymond
Principal Engineer, Intel

But thanks for having me. I figured we'd go ahead and take a brand-new look at a couple of the new AI PC applications that are out there. And I know we've all been looking for the killer application, but here's the deal: We are in the middle of a major acceleration cycle. Just like you said, LLMs, OpenVINO toolkits for everyone. And if you don't have the killer app today, wait five minutes, for your AI application is just right around the corner. So let me show you a couple of quick examples of exactly what that's gonna look like here. So really quickly, I'm gonna go ahead and take a look at this machine.

Pat Gelsinger
CEO, Intel

Okay.

Craig Raymond
Principal Engineer, Intel

It's one of our brand-new platforms for the AI PC, and... Pretty great! And we're running on this one Audacity, which has a plugin for Riffusion. So Audacity being an open-source music production tool, and Riffusion being an AI music generator. So I figure we go ahead and kick this off. Hey, hey, Pat, what kind of music are you into?

Pat Gelsinger
CEO, Intel

Well, you know, I sort of like lots of genres. You know, I'm really quite open, but, you know, there's this, you know, from my home county, really close to the town where I was born, there was this other singer. You might have heard from her. Really close, she was born there, Taylor Swift. Have you heard of her?

Craig Raymond
Principal Engineer, Intel

Oh, Taylor Swift.

Pat Gelsinger
CEO, Intel

Yeah.

Craig Raymond
Principal Engineer, Intel

I, I-

Pat Gelsinger
CEO, Intel

She's the other famous person from my home.

Craig Raymond
Principal Engineer, Intel

I think a few of us have heard about that. So, you know, you guys have all heard it here for the first time. Pat's a Swiftie. But it's okay, Pat. I think we're all Swifties. Just some of us don't like to admit it. But let's go ahead. We're going to actually create a song now in the style of Taylor Swift. We'd love to have her come along, but she was busy.

Pat Gelsinger
CEO, Intel

Yeah. Yeah.

Craig Raymond
Principal Engineer, Intel

So-

Pat Gelsinger
CEO, Intel

We didn't invite her. Sorry.

Craig Raymond
Principal Engineer, Intel

Anyway, we're gonna, we're gonna generate our song here really quick, and we'll let that guy cook. Now let's move on to the next one over here.

Pat Gelsinger
CEO, Intel

Okay.

Craig Raymond
Principal Engineer, Intel

So what we're showing on this machine is GIMP, which is an image manipulation program, open source for everyone, and it has a plugin for OpenVINO, of course, which has Stable Diffusion. So let's do text to image. What we thought was pretty amazing is that currently your wife, Linda, is in Africa-

Pat Gelsinger
CEO, Intel

Yep.

Craig Raymond
Principal Engineer, Intel

doing that amazing school that you built out there and all the philanthropy work. So we would love to do something in the theme of Africa, but put a little fun spin on it if we could, for this one. So we'll go ahead and generate that here. We'll let this guy cook, but-

Pat Gelsinger
CEO, Intel

Okay.

Craig Raymond
Principal Engineer, Intel

Let's go, let's go ahead.

Pat Gelsinger
CEO, Intel

Okay.

Craig Raymond
Principal Engineer, Intel

and find out the payoffs. We're both all right.

Pat Gelsinger
CEO, Intel

So let's hear, let's hear Taylor, or sort of, or sort of not.

Craig Raymond
Principal Engineer, Intel

In the style of.

Pat Gelsinger
CEO, Intel

Okay.

Craig Raymond
Principal Engineer, Intel

You're gonna get sued, Pat. Okay, here we go. Let's go ahead and play this back.

Pat Gelsinger
CEO, Intel

It's not bad.

Speaker 26

I'm not gonna lie, that's pretty good. However, not a single lyric-

Pat Gelsinger
CEO, Intel

Just generated right here, right?

Craig Raymond
Principal Engineer, Intel

Just generated and locally.

Pat Gelsinger
CEO, Intel

Uh-huh.

Craig Raymond
Principal Engineer, Intel

So here's the deal. We're gonna see a ton of hybrid models-

Pat Gelsinger
CEO, Intel

Okay.

Craig Raymond
Principal Engineer, Intel

But on your AI PC, you're actually going to be able to run some of these locally as well, which is a great way to keep all your IP secure. But let's move on over to our other system here, and we've generated our picture. Is this the giraffe in a cowboy hat that you were looking for, Pat?

Pat Gelsinger
CEO, Intel

You know, Linda loves giraffes. That's her favorite, you know, plains animal there, so she's gonna love this one.

Craig Raymond
Principal Engineer, Intel

I think we absolutely nailed it.

Pat Gelsinger
CEO, Intel

Okay.

Craig Raymond
Principal Engineer, Intel

These are the type of applications that we're really gonna be looking for this next generation here.

Pat Gelsinger
CEO, Intel

But, one more thing, Craig. That's like the ugliest looking machine I have seen in a long time, and I've seen some ugly ones. So what are you doing there?

Craig Raymond
Principal Engineer, Intel

Well, what, this guy with the test mode and the blue thing?

Pat Gelsinger
CEO, Intel

Yeah, yeah, yeah.

Craig Raymond
Principal Engineer, Intel

You know, just don't worry about this guy. The beauty is on the inside, Pat.

Pat Gelsinger
CEO, Intel

Okay, okay.

Craig Raymond
Principal Engineer, Intel

This is actually the world's first showing of our Lunar Lake system.

Pat Gelsinger
CEO, Intel

Woo-hoo!

Craig Raymond
Principal Engineer, Intel

I know, right? I bet you didn't see that coming, you know?

Pat Gelsinger
CEO, Intel

Yeah.

Craig Raymond
Principal Engineer, Intel

So instead of doing the first boot like we normally do... Pat is always pushing us to push all the way to the envelope with our demonstrations here. So we said, "What's better than the next-gen show-off? Well, we'll just do the next, next-gen show-off."

Pat Gelsinger
CEO, Intel

Okay.

Craig Raymond
Principal Engineer, Intel

So here we are, Lunar Lake, stable and up and running. Not just first booting, but running the AI applications that are gonna be powering us for the next generation of these AI PCs, Pat. It's gonna be great.

Pat Gelsinger
CEO, Intel

Okay. Thank you, Craig.

Craig Raymond
Principal Engineer, Intel

Thank you so much, Pat.

Pat Gelsinger
CEO, Intel

Well, a little bit of unexpected demo wear there, so... You know, right as we bring in this age of the AI PC, you know, we're, you know, thrilled by the momentum that we're seeing, and we're gonna bring millions of AI-enabled PCs, ramping to tens of millions, to hundreds of millions, to enable tens of billions of TOPS of capabilities. And we're working with the industry and our OEM partners to make these sustainable, energy-efficient, you know, platforms. And the journey begins with our upcoming new Intel Core Ultra processor launch, formerly Meteor Lake. I'll probably goof it up once or twice in the keynote yet, you know, but now it's the Intel Core Ultra. And we're excited because this brings a whole set of new capabilities, the NPU capabilities, and it's launching December 14. And we're excited to have this in volume with numerous customers and partners.

But this journey of the AI PC is a big deal. You might even say that we might need some copilots for the journey, and in fact, our friends at Microsoft are copiloting this journey with us, and the Windows 11 PC with Core Ultra is even more powerful and personal with Windows Copilot. And Microsoft is planning to release broadly soon, no, really soon, their Copilot capabilities that are gonna enable this ability to rewrite, to summarize, to explain content across a range of questions and use cases right there on your AI PC, taking complex to simple, all done on your AI platform. And Meteor Lake, or the Intel Core Ultra, is a tour de force. And as you look at all the things on this slide, this is the first client product that's manufactured on Intel 4 process technology.

So on our five-nodes-in-four-years journey, 7 and 4, check and check. This is the first platform where we're using EUV, the most advanced lithographic capabilities. It's also our first chiplet design, built using Foveros, our advanced 3D packaging technology. It's our first to bring CPU, GPU, and NPU onto a single platform with energy and power efficiency, and we're gonna deliver it at scale. You know, to do that, it's power efficient and gonna be delivered at the price points that enable high-volume deployment. Our NPU will enable AI developers to take advantage of the standard software and frameworks for AI development and hugely expand the applications for edge deployments. But don't just take my word for it. Let's hear from some of our ecosystem partners for the upcoming Intel Core Ultra launch.

Please join me in welcoming the COO of Acer, Jerry Kao, to the stage to share some of the amazing work that they're doing. Jerry?

Jerry Kao
COO, Acer

Hi, Pat. Yeah. Wow! It's a big pleasure to be here on stage with you. Yeah. A friend from a really long time ago.

Pat Gelsinger
CEO, Intel

Oh, we've had so many good things with Acer over the years.

Jerry Kao
COO, Acer

Yes. Yes, so many.

Pat Gelsinger
CEO, Intel

Today is another.

Jerry Kao
COO, Acer

Yes. And we're so excited for the Meteor Lake. Sorry, not Meteor Lake, but the Intel Core Ultra.

Pat Gelsinger
CEO, Intel

There we go.

Jerry Kao
COO, Acer

We're excited about the prospects, a lot of things, especially with the NPU, because the NPU will bring AI to the PC.

Pat Gelsinger
CEO, Intel

Mm-hmm.

Jerry Kao
COO, Acer

Pat, maybe you don't know, but Acer has been working with Intel for a while to bring the Core Ultra to laptops. And today-

Pat Gelsinger
CEO, Intel

Show us what you got.

Jerry Kao
COO, Acer

You want to take a look?

Pat Gelsinger
CEO, Intel

Ooh.

Jerry Kao
COO, Acer

Okay.

Pat Gelsinger
CEO, Intel

Okay.

Jerry Kao
COO, Acer

This is a sneak peek of our upcoming laptop with Intel Core Ultra. We named it the Acer Swift series.

Pat Gelsinger
CEO, Intel

Mm-hmm.

Jerry Kao
COO, Acer

Of course, it's a thin and light, and also thin and powerful notebook with all the thin-and-light features, longer battery life and very light weight. In addition to that, the most beautiful thing is the Intel Core Ultra inside. It means strong CPU, GPU, and NPU, which means AI PC.

It's the first time we see an AI PC in the world, and we're not waiting for that kind of a box, Lunar Lake, many years later. No, not many years, but just a few... Just, yeah, but-

Pat Gelsinger
CEO, Intel

One year.

Jerry Kao
COO, Acer

This one. It's coming. It's coming.

Pat Gelsinger
CEO, Intel

Uh, yeah.

Jerry Kao
COO, Acer

Talk about AI PC. You know, Pat, AI, for a lot of people, it's just hardware, but that's wrong.

Pat Gelsinger
CEO, Intel

Mm-hmm.

Jerry Kao
COO, Acer

Because with AI, the most important thing is the software to unlock the power of the hardware. So for the AI portion, actually, Acer has been working with Intel to develop a suite of applications for end users to enjoy the AI, starting from when they boot up the computer.

Pat Gelsinger
CEO, Intel

Mm-hmm.

Jerry Kao
COO, Acer

In addition to that, by also working with Intel on the OpenVINO tools, we also created AI libraries so that end users, or I should say, developers, can create a lot of AI applications based on those libraries. Yeah, we've talked a lot. It's showtime.

Pat Gelsinger
CEO, Intel

Okay, let's see it.

Jerry Kao
COO, Acer

Yeah, showtime.

Pat Gelsinger
CEO, Intel

Yeah.

Jerry Kao
COO, Acer

Okay, let's try to make a background for this laptop.

Pat Gelsinger
CEO, Intel

Okay.

Jerry Kao
COO, Acer

Craig is going to run the demo, and what we're seeing is an exactly identical laptop here. Yeah, we're going to use the picture of the ballerina there and add a stable image to it.

Pat Gelsinger
CEO, Intel

Okay.

Jerry Kao
COO, Acer

You know, originally, people think about using yours or my picture there, but my team convinced me not to do that.

Pat Gelsinger
CEO, Intel

Okay.

Jerry Kao
COO, Acer

Yeah. Ballerina is there. Okay, what we're going to do now is using Stable Diffusion-

Pat Gelsinger
CEO, Intel

Okay

Jerry Kao
COO, Acer

And also with a plugin for GIMP, which we developed with OpenVINO, to harness the CPU and GPU and NPU on a platform to create an astronaut with the same pose as the Ballerina.

Pat Gelsinger
CEO, Intel

Excellent.

Jerry Kao
COO, Acer

And then we'll update the scale-

Pat Gelsinger
CEO, Intel

Uh-huh.

Jerry Kao
COO, Acer

to high definition, because you're going to use it as a wallpaper.

Pat Gelsinger
CEO, Intel

Mm-hmm.

Jerry Kao
COO, Acer

Okay, Craig, are you ready?

Pat Gelsinger
CEO, Intel

Okay.

Jerry Kao
COO, Acer

Please press the button.

Pat Gelsinger
CEO, Intel

Yeah. All right. Okay.

Craig Raymond
Principal Engineer, Intel

This has been generated. Now we're going to go ahead and minimize. Let's go ahead and press that button. There we go.

Jerry Kao
COO, Acer

Okay, I think, ladies and gentlemen, professional people like you guys here, you should know how long it normally takes to run Stable Diffusion on a traditional notebook. But today with the Intel Core Ultra, it's finished.
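
For developers who want to try a comparable text-to-image run through OpenVINO outside of GIMP, a minimal sketch with optimum-intel might look like the following; the checkpoint and prompt are placeholders, and this is not the Acer/Intel plugin shown on stage:

```python
# Minimal Stable Diffusion via OpenVINO sketch (illustrative; not the GIMP plugin).
from optimum.intel import OVStableDiffusionPipeline

pipe = OVStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumption: any diffusers-format SD checkpoint
    export=True,                       # convert the PyTorch weights to OpenVINO IR
)

# Compiles for CPU by default; other OpenVINO devices can be targeted if available.
image = pipe("an astronaut striking a ballerina pose, photorealistic").images[0]
image.save("astronaut.png")
```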

Pat Gelsinger
CEO, Intel

Yeah. Yeah, this, you know, I just love Stable Diffusion-

Jerry Kao
COO, Acer

Yes

Pat Gelsinger
CEO, Intel

Some of the capabilities it's gonna unleash. We really see it as one of the game changers-

Jerry Kao
COO, Acer

That's true.

Pat Gelsinger
CEO, Intel

Right, in this field.

Jerry Kao
COO, Acer

That's true. Yeah, and in addition to that, Acer is also working with Intel on more AI effects.

Pat Gelsinger
CEO, Intel

Mm-hmm.

Jerry Kao
COO, Acer

For example, we have so-called the Acer Parallax View, which will add the motion to the picture, like what an astronaut is going to do.

Pat Gelsinger
CEO, Intel

Yep.

Jerry Kao
COO, Acer

Yeah.

Pat Gelsinger
CEO, Intel

We're seeing it move in real time.

Jerry Kao
COO, Acer

Yeah, it's moving.

Pat Gelsinger
CEO, Intel

We even do that based on camera.

Jerry Kao
COO, Acer

That's true.

Pat Gelsinger
CEO, Intel

As well. Yeah.

Jerry Kao
COO, Acer

Yeah. The Acer Parallax View can also use the notebook camera to track people's faces-

Pat Gelsinger
CEO, Intel

Mm-hmm

Jerry Kao
COO, Acer

to change the perspective of the image.

Pat Gelsinger
CEO, Intel

Uh-huh.

Jerry Kao
COO, Acer

You're gonna have a 3D look and feel.

Pat Gelsinger
CEO, Intel

Yeah.

Jerry Kao
COO, Acer

Yeah.

Pat Gelsinger
CEO, Intel

Yeah. It's just incredible, Jerry, and we're so excited to do the launch with you later in the year.

Jerry Kao
COO, Acer

That's true.

Pat Gelsinger
CEO, Intel

You know, just game-changing platform.

Jerry Kao
COO, Acer

Yeah. Yeah, yeah.

Pat Gelsinger
CEO, Intel

You know, thank you so much for joining.

Jerry Kao
COO, Acer

Thank you very much.

Pat Gelsinger
CEO, Intel

Yeah, I get to keep this one, right?

Jerry Kao
COO, Acer

No, no, no, no, no, it's mine. Not until December fourteenth, I'll give it to you. Thank you very much.

Pat Gelsinger
CEO, Intel

Thank you to Jerry of Acer. But this is just the beginning. OEMs, ISVs, there are all of these capabilities, and, you know, we want the AI PC to realize your visions and dreams, and we're working on the future generations of processors, you know, including Arrow Lake and the first demonstrations of Lunar Lake. And I'm particularly excited that, you know, hot out of the fab, right, you know, Arrow Lake is on our Intel 20A process technology. This wafer's still a little bit warm, you know, straight out of the fab. But the first demonstrations of our 20A process technology, with further improvements in performance, power, and area, working as expected, and we're excited to show these capabilities to you today. So, first-ever demonstration of Lunar Lake; first silicon arriving and healthy on Arrow Lake.

What follows that for our 2025 offering is Panther Lake, and design is well underway. In fact, it's so well underway that we'll be sending it to fab in Q1 of 2024, where we'll have the first fab of Panther Lake underway as well, and that's on Intel 18A, you know, the finish line of our five nodes in four years. And, you know, as we see these innovations continuing to push us forward, you know, what is the most important unit of optimization for us as a technology industry? Simply, performance per unit of energy. Every segment of our markets, whether it's data center, it's edge, it's telco, right, it's PC, right, you know, cloud, AI, they're all power-constrained. So we have to optimize our energies toward how we deliver sustainability and performance in our next generation of Intel products.

Literally, we're developing the technology across our product lines to reduce energy consumption. And this is clearly the case for our client products, but it's also in our server products. In Xeon processors, you know, we are now well underway. The execution machine is back at Intel, and we're seeing predictable, stable cadence of new products to meet the data center needs. And over the next several months, we got exciting stuff coming out, and we're going to be introducing better performance, efficiency, and TCO across the product line. And next up is Emerald Rapids. And with Meteor Lake and Emerald Rapids, these are the last products on Intel 7. And with our Core Ultra launch, we'll be releasing Emerald on December 14.

The new Xeons bring huge performance-per-watt improvements, increased core counts, and faster memory technologies, in the same power envelope as today's 4th Gen Xeons, while providing up to 40% more performance on key workloads like AI in that same socket and that same power envelope. And because of that compatibility at the software and at the socket level, we expect to see a rapid move by OEMs and software developers to take advantage of it. You know, and 5th Gen Xeon comes with the same, and further refined, versions of our accelerators, AI capabilities for these next-gen workloads. 5th Gen Xeon coming December 14th. But 2024, for the Xeon roadmap, looks really good. And, with that, I'm quite excited.

We have many of the products underway, and the next-gen server platform gives us the opportunity to bring both E-core and P-core, efficient cores and performance cores, and these are named Sierra Forest and Granite Rapids, and both of these are progressing on or ahead of schedule. And this gives simplicity and flexibility for system designers, designing one platform to be able to bring both of these products into the marketplace. The package has the same I/O die that delivers compatibility at the software and hardware level for things like PCIe Gen 5 and CXL, you know, 2.0. You know, the Sierra Forest product, right? And this is on the Intel 3 process technology, you know, providing 2.5 times the rack density and 2.4x the performance per watt over fourth-gen Xeon.

This specifically is unique and beneficial, you know, for cloud-scale workload capabilities. So really excited about this work. Granite Rapids is a more balanced machine for peak performance and AI capabilities, as well as major improvements, 2-3x better than 4th Gen Xeon, you know, for the broad data center workloads. We're already well underway on the next version, Clearwater Forest, an 18A version, right, of the E-core product, to arrive in 2025. The roadmap is healthy. We are executing well, but we kept a little secret. You know, the Xeon engineering team, you know, they're always a bit of a creative bunch and, you know, always have a few things up their sleeve. "No, no, we can't do that, boss." "Yes, you can." "No, you can't." And then they surprise me when they can.

And when we showed off Sierra Forest earlier in the year, we showed 144 cores per piece of silicon. What we didn't tell you was that we had two die per package. So we have a whopping 288 cores on one Sierra Forest product line, with 12 channels of memory and further improvements for cloud-optimized environments. Huge gains for cloud-scale customers. And, you know, I remember when we produced the first 4-core products. 288 cores? Wow, I must be getting old or something. This is really incredible gains. 2024 is shaping up to be a really, really good year for the CPU and our Xeon customers. But we couldn't talk about data center computing and this proliferation of capabilities without touching on security.

You know, as we think about the criticality of security at the data center, at the edge, you know, the common denominator underneath it is protecting my apps and my data. You know, this underlayment of the superpowers. Technology is neutral, right? Neither good nor bad. We can use it to create great things. We can also use it to power cyber threats. We have to protect our data at rest, in-flight, and in use. The seamless integration of technology into our lives is opening up more attack surfaces and vectors than we've ever seen before. With that, we've been building in increasing capabilities into our silicon platforms to enable secure, confidential computing. Simply put, security starts with the silicon. Security starts with Intel. I began this work almost 25 years ago in my career.

You might say, "Wow, man, this is hard work." And we've just been building out more and more silicon-based capabilities. And tomorrow, Greg, in his keynote, will announce a number of new capabilities and services, specifically in the area of security, based on Intel. And this, to us, is super exciting. I encourage you all, be here tomorrow. Don't miss Greg's keynote as we unveil that... You know, we've talked about the superpowers. You know, this idea of these five technology superpowers just invading everything. Compute, everything becomes a computer. Connectivity, everyone and everything is connected. You know, infrastructure, the unlimited scale of the cloud, combined with the unlimited reach of the intelligent edge, simultaneously addressing latency and higher bandwidth. You know, and AI, this intelligence everywhere, you know, being able to take this data and compute and, you know, algorithmic breakthroughs, software writing software at scale.

The last of these, sensing. Breakthroughs in low-cost, high-resolution sensors, you know, I believe are just opening up other pathways to bring technology into our everyday lives, bringing more data and capabilities. We're seeing advances in automation, processing, robotics, giving machines human-like capabilities, where even our disabilities become digitally enhanced strengths or superpowers. One of my favorite sounds is hearing my granddaughter call me, "Papa. Papa Pat." If it were not for my hearing aids, and I have a family where, you know, almost every one of them, right, has lost their hearing, I might not be able to hear that in the future. I believe in this area of the superpowers of sensing, we have so much more to do. With that, you know, I'm super excited to find a soulmate on this journey.

You know, Dan Siroker, the founder of Rewind.ai. Please join me in welcoming Dan to the stage.

Dan Siroker
Co-Founder and CEO, Rewind AI

Hey, Pat. Thanks for having me.

Pat Gelsinger
CEO, Intel

Hey, thank you, Dan. Tell us your story a little bit.

Dan Siroker
Co-Founder and CEO, Rewind AI

So I started to lose my hearing in my 20s, and when I turned 30, I tried a hearing aid, and it was magical. To lose a sense and gain it back again feels like gaining a superpower. And ever since that moment, I've been on a hunt for ways that technology can augment human capabilities and give us superpowers, and that's what led me to Rewind.

Pat Gelsinger
CEO, Intel

What's Rewind doing?

Dan Siroker
Co-Founder and CEO, Rewind AI

So Rewind is a personalized AI powered by everything you've seen, said, or heard. The way it works is it captures your screen and your audio, it compresses it, encrypts it, transcribes it, and stores it all locally on your PC. And then, best of all, you can ask any question of anything you've seen, said, or heard.
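
A heavily simplified sketch of that kind of local pipeline, and not Rewind's actual implementation, could pair a local speech-to-text model with an embedding search over the transcript; the file name, model choices, and query below are placeholders:

```python
# Toy "ask questions about what you've heard" sketch, running entirely locally:
# transcribe a captured recording, embed the transcript segments, and answer a
# query by cosine similarity. Everything here is illustrative, not Rewind's code.
import whisper
from sentence_transformers import SentenceTransformer, util

asr = whisper.load_model("base")                      # local speech-to-text
transcript = asr.transcribe("meeting_capture.wav")
segments = [seg["text"].strip() for seg in transcript["segments"]]

embedder = SentenceTransformer("all-MiniLM-L6-v2")    # small local embedding model
seg_vecs = embedder.encode(segments, convert_to_tensor=True)

query = "When and where is the session on chatbots?"
q_vec = embedder.encode(query, convert_to_tensor=True)

best = util.cos_sim(q_vec, seg_vecs)[0].argmax().item()
print("Most relevant moment:", segments[best])
```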

Pat Gelsinger
CEO, Intel

Well, that's super cool. Earlier, we talked about this age of the AI PC beginning now, you know, and this ability to capture everything you see and hear and, you know, be able to transcribe, analyze... You know, I mean, rather than just talking about it, can you show me?

Dan Siroker
Co-Founder and CEO, Rewind AI

Sure, I have a machine. Let's go.

Pat Gelsinger
CEO, Intel

Okay.

Dan Siroker
Co-Founder and CEO, Rewind AI

Let's check it out.

Pat Gelsinger
CEO, Intel

Let's take a look here.

Dan Siroker
Co-Founder and CEO, Rewind AI

All right, here's Rewind, and to show you how it works, I'm gonna pull up the timeline. This is what many of us were probably doing last night, looking around and seeing different sessions we might want to attend here at the conference. I can just rewind back and forth in time, like a DVR, but the real power comes from asking questions. So here I'm going to ask Rewind, "When and where is the session on chatbots?" And what Rewind's gonna do, it's gonna go back through my memories, things I've captured on my machine-

Pat Gelsinger
CEO, Intel

Okay.

Dan Siroker
Co-Founder and CEO, Rewind AI

We'll find that exact moment. Here it says it's the session on chatbots entitled Demystifying Generative AI. Show up to it, it's tomorrow at 12:45 P.M. It'll be in Clubhouse B, and it'll... Here's a little summary of the actual session.

Pat Gelsinger
CEO, Intel

Hey, that's super cool.

Dan Siroker
Co-Founder and CEO, Rewind AI

And so this is a great example of going back to a specific moment in your past, but maybe instead, you want it to actually do some work for you.

Pat Gelsinger
CEO, Intel

Okay.

Dan Siroker
Co-Founder and CEO, Rewind AI

So one of our investors is Sam Altman. I'm just gonna ask Rewind to do my job, which is write me an email to Sam Altman, asking him to catch up. And here-

Pat Gelsinger
CEO, Intel

Okay.

Dan Siroker
Co-Founder and CEO, Rewind AI

And here-

Pat Gelsinger
CEO, Intel

I was talking to Sam last week, so this is good to see. So let's see what we come up with.

Dan Siroker
Co-Founder and CEO, Rewind AI

Yeah. So here it's going through all of my experiences, my, prior conversations. It mentions that we've raised $33 million from top-tier investors, including him, and gives me a little, template. I can just copy and paste, send him an email. I don't have to actually think about it.

Pat Gelsinger
CEO, Intel

Mm-hmm, and this is leveraging Core and OpenVINO, right?

Dan Siroker
Co-Founder and CEO, Rewind AI

Yeah. So what I showed you here today was using GPT-4.

Pat Gelsinger
CEO, Intel

Okay.

Dan Siroker
Co-Founder and CEO, Rewind AI

But what's even better is if we could do this entirely locally. So for the first time ever, I'm gonna show you a demo of a personalized AI, powered entirely locally on your AI PC using OpenVINO. So let me switch our mode here from GPT-4 to OpenVINO.

Pat Gelsinger
CEO, Intel

Okay, so now everything's gonna run locally.

Dan Siroker
Co-Founder and CEO, Rewind AI

Yeah, and actually, to prove it to you, you know what I'm gonna do? I'm gonna actually turn off our Wi-Fi.

Pat Gelsinger
CEO, Intel

Okay.

Dan Siroker
Co-Founder and CEO, Rewind AI

Hopefully, this will work. I'm gonna turn off the Wi-Fi.

Pat Gelsinger
CEO, Intel

Okay, we're gonna brick it, you know, if it comes-

Dan Siroker
Co-Founder and CEO, Rewind AI

Yes.

Pat Gelsinger
CEO, Intel

Yeah.

Dan Siroker
Co-Founder and CEO, Rewind AI

So this machine is entirely off of the network, and I'm gonna ask, you know, a very simple question: What is Pat's favorite sound?

Pat Gelsinger
CEO, Intel

Okay. So now we're running locally, right? Using OpenVINO Ultra-- Core Ultra.

Dan Siroker
Co-Founder and CEO, Rewind AI

Exactly.

Pat Gelsinger
CEO, Intel

Right.

Dan Siroker
Co-Founder and CEO, Rewind AI

Yes, and so it's gonna take the data that's from your... Oops, excuse me. Try that one more time. It's gonna take data that's from the machine, and here you can see-

Pat Gelsinger
CEO, Intel

Okay.

Dan Siroker
Co-Founder and CEO, Rewind AI

It knows that your favorite sound is your granddaughter's voice calling you, "Papa."

Pat Gelsinger
CEO, Intel

Okay, very good. So, you know, you know, you know, this to me is killer app domain. I am so excited about these capabilities and the ability to run them all locally. This is my data locally on my machine, right? You know, I don't worry about any of the privacy or other things associated with it, but selectively leveraging the cloud when we need to.

Dan Siroker
Co-Founder and CEO, Rewind AI

That's right. Exactly. Yeah, and I'll just show you one last example. This is an example of using summarization, where I'm gonna ask it to summarize this keynote, and just for fun, we'll just say, "Use emojis." And we'll see what it comes up with. We sped this up a little bit, but you'll see it's saying that it's an incredible keynote by Pat over here. Future of technology is looking great. Here, there's a starstruck emoji, so the model likes you. And wait, there's more. We're sharing details of the client performance, roadmaps, et cetera. So this really shows you the power of an AI PC, leveraging the data of everything you've seen, said, or heard, and truly giving you superpowers.

Pat Gelsinger
CEO, Intel

... Hey, this is so good. Thank you so much for joining us, Dan. Super excited for Rewind.

Dan Siroker
Co-Founder and CEO, Rewind AI

Thank you, Pat.

Pat Gelsinger
CEO, Intel

You know, as you saw with Rewind, leveraging Core Ultra and OpenVINO and transforming lives and improving accessibility. But until recently, PCs couldn't connect to hearing aids like mine because traditional Bluetooth simply used too much power. Well, Bluetooth Low Energy Audio, which we've worked on with Microsoft, first became available earlier this year and is part of the Core Ultra platform. And we've been collaborating with Starkey Labs, and these hearing aids, right, like the ones that I'm wearing here, you know, to create a POC for how AI can improve the hearing aid experience. So we're going to join a call here with Arnaud on my Samsung Galaxy Book. So, Arnaud, how are you?

Arnaud Lambert
Director of Product Management, Starkey Labs

Hello, Pat. I trust your keynote is doing well?

Pat Gelsinger
CEO, Intel

Yeah. I trust you're keeping track of all the great things I have to say, Arnaud.

Arnaud Lambert
Director of Product Management, Starkey Labs

Yeah, I'm watching from backstage. So now your PC is contextually aware and will automatically switch your hearing aids between ambient aware and focus mode-

Pat Gelsinger
CEO, Intel

Uh-huh.

Arnaud Lambert
Director of Product Management, Starkey Labs

by adjusting the sound amplification. You can stay in your flow without missing any important information.

Pat Gelsinger
CEO, Intel

Okay, so when I can go up here, and I can switch from focus mode, you know, to ambient mode, and now I can hear the other things going on. And in focus mode, it's just you and I, Arnaud.

Arnaud Lambert
Director of Product Management, Starkey Labs

Yes, Pat, we won't amplify the background noise, so you can concentrate on our conversation. But we'll make sure you don't miss any important interruption, such as if a delivery arrives.

Pat Gelsinger
CEO, Intel

Well, okay, maybe the grandkids are at the door. Hopefully, my wife gets it, so we'll dismiss that there. So, you know, stay focused just on you and I-

Arnaud Lambert
Director of Product Management, Starkey Labs

Yes.

Pat Gelsinger
CEO, Intel

But the PC is still detecting.

Arnaud Lambert
Director of Product Management, Starkey Labs

Yeah, your PC recognized the delivery and alerted you, but you choose to ignore it, and the PC decided to keep the hearing aids in focus mode.

Diana Blazek
Product Manager, Intel

Excuse me! Excuse me.

Pat Gelsinger
CEO, Intel

Okay. Well, okay. I think I'm going to go talk to whoever's here. Just a second, Arnaud.

Arnaud Lambert
Director of Product Management, Starkey Labs

Sure.

Pat Gelsinger
CEO, Intel

Hey, Diana.

Diana Blazek
Product Manager, Intel

Hey. Hey, Pat.

Pat Gelsinger
CEO, Intel

What's up? Why are you interrupting my demo?

Diana Blazek
Product Manager, Intel

Do I need a reason? It's always a good time to talk to me.

Pat Gelsinger
CEO, Intel

Oh, but I'm sorry.

Diana Blazek
Product Manager, Intel

Thank you.

Pat Gelsinger
CEO, Intel

I'm sorry.

Diana Blazek
Product Manager, Intel

Thank you for taking the interruption.

Pat Gelsinger
CEO, Intel

Okay. Okay.

Diana Blazek
Product Manager, Intel

Yeah, thanks for taking the interruption. Actually, I think I might have wasted my time waving, because I think your computer actually heard the sound of me saying, "Excuse me."

Pat Gelsinger
CEO, Intel

Mm-hmm. Mm-hmm.

Diana Blazek
Product Manager, Intel

It even told you which direction that sound was coming from, so I just wasted a lot of energy. But I'm so glad you came over.

Pat Gelsinger
CEO, Intel

Thank you.

Diana Blazek
Product Manager, Intel

I think you've got a really great, great system over there. It sounds like with the head tracking AI, there, it actually detected when you left the conference call, and so your hearing aids were automatically switched to ambient mode, so we could chat.

Pat Gelsinger
CEO, Intel

Mm-hmm.

Diana Blazek
Product Manager, Intel

Then when you go back, it's going to, it's going to automatically switch your hearing aids back to focus mode, so you can be focused again on talking to Arnaud. So pretty, pretty cool technology there.

Pat Gelsinger
CEO, Intel

Very nice.

Diana Blazek
Product Manager, Intel

The tracking AI is actually running on the NPU in the Intel Core Ultra. Another thing we have running on the NPU is an AI summarization technology. It's actually, as you're talking to me, it's over there-

Pat Gelsinger
CEO, Intel

Yeah

Diana Blazek
Product Manager, Intel

- visually summarizing and condensing.

Pat Gelsinger
CEO, Intel

So Arnaud's still talking, and it's summarizing-

Diana Blazek
Product Manager, Intel

Arnaud is still talking, and it's summarizing, it's condensing it for you, and so you can just go back there, and you'll be able to jump in really quick. You'll know exactly what transpired while you were gone. Very nice, very sleek, and yeah.

Pat Gelsinger
CEO, Intel

Okay.

Diana Blazek
Product Manager, Intel

Great solution.

Pat Gelsinger
CEO, Intel

Can I go back now, or are we-

Diana Blazek
Product Manager, Intel

Um, okay.

Pat Gelsinger
CEO, Intel

Okay.

Diana Blazek
Product Manager, Intel

You can go back.

Pat Gelsinger
CEO, Intel

Thanks.

Arnaud Lambert
Director of Product Management, Starkey Labs

Cette composante d'intelligence artificielle pour fonctionner efficacement en temps réel et réduit la charge sur le CPU. [In English: This AI component runs efficiently in real time and reduces the load on the CPU.]

Pat Gelsinger
CEO, Intel

So, Arnaud, I'm back here now.

Arnaud Lambert
Director of Product Management, Starkey Labs

Oh.

Pat Gelsinger
CEO, Intel

Right? And Diana was helping, and we were chatting, but you were talking in French when I was gone?

Arnaud Lambert
Director of Product Management, Starkey Labs

Yes. Sorry, Pat, I was explaining the demo in French. Oh, sorry.

Pat Gelsinger
CEO, Intel

So, now not only am I getting real-time summarization of what I missed when I went out of Zoom, right? It brings me back to focus mode when I'm back to the PC, and it also did translation from French in real time as well. What do you think? Is this the AI PC generation?

Arnaud Lambert
Director of Product Management, Starkey Labs

Thank you, Pat.

Pat Gelsinger
CEO, Intel

And if that wasn't good enough, right, obviously, we have other sensing deficiencies. In the future, I want my glasses to become AR-enhanced glasses as well. And this is the next generation, what we call a Visor, from Immersed. They're working to have these enhanced by PC capabilities as well, to make them smaller, lighter, and eventually as small and light as my glasses are today, becoming my sensing devices for vision, hearing, and every other aspect of human existence. So, you know, thank you very much for seeing this, you know, glimpse into what sensing of the future will be like and how the AI PC will enable that. Now, you know, as we continue our technological breadth, our third Ignite submission is actually a little bit more galactic.

What Antaris is working on is, how can they bring space and satellite technology ecosystem to make it open and broadly available, and how they're using AI and ML to onboard satellite software? So let's see our third submission, Antaris.

Speaker 28

Imagine a world where more and more satellite resources are actually helping us to monitor and improve and mitigate climate change, where satellites are improving agricultural yields and productivity, where we have transparent information as to what is happening globally. Imagine a world where you can design, simulate, and operate a virtual satellite in a matter of minutes. ... The biggest problem facing the space-based industry today is that it's not an open market.

Speaker 29

Designing and building a satellite is very expensive, and it takes a long time.

Speaker 28

It's still proprietary, closed stacks of hardware and software.

Speaker 29

Very few people can do the design.

Speaker 28

Our vision at Antaris is to show people what's possible with software-driven engineering for satellites and space systems.

Speaker 29

To really unblock access to space.

Speaker 28

I was the chief operating officer at a company called Planet Labs that was doing very innovative things with small satellites, and I brought Karthik in as the chief technology officer there.

Speaker 29

With just 30 minutes of conversation, we just clicked.

Speaker 28

We just are very aligned with respect to our values, with respect to our work ethics.

Speaker 29

We put work at first, and we solve the big problems.

Speaker 28

The goal of our software platform at Antaris is to cut life cycle satellite operating costs by a factor of 10 and cut time to orbit by a factor of 2.

Speaker 29

Antaris cloud platform provides a set of tools that is easy to interact with. You get to see it, you get to operate it, you get to simulate and test it.

Speaker 28

Make all your mistakes in the cloud before you actually build the first satellite or even turn the first group. And then when the satellite goes on orbit, you use that very same interface to manage the real satellite. The Earth is changing constantly. The more that we can monitor and understand that change and what's driving it, the better off we're going to be.

Speaker 29

That is an essential mission of what Antaris is all about.

Speaker 28

What it means is that we can actually put more satellites on orbit to deliver more services to more people at lower cost.

Pat Gelsinger
CEO, Intel

So what do you think? Candidate number three. And just in a little bit, we'll tell you who number one is for today. You've heard us throughout our conversation about OpenVINO and, you know, how we're using it at the edge, and you know, how we're using it with Jerry and Asher and Dan from Rewind and Starkey Labs. You know, and OpenVINO is Intel's AI inferencing and deployment runtime platform, you know, that we spoke about last year. You know, we've now released OpenVINO 2023.1, providing broader application support, more natural language processing, computer vision, generative AI. You know, bringing us closer to, you know, this vision of any model, any hardware, anywhere. You know, we're now already deploying optimized integrations of capabilities like Llama 2 models from OS and clouds, you know, to client.
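To give a concrete sense of what "any model, any hardware, anywhere" looks like from the developer's side, here is a minimal sketch of loading and running a model with the OpenVINO runtime's Python API. The model path and input shape are placeholders, not anything shown in the keynote, and device selection is left to OpenVINO's "AUTO" plugin.

```python
import numpy as np
from openvino.runtime import Core  # OpenVINO's Python inference API

# Minimal sketch: load an already-converted model and run it on whatever
# device is available (CPU, GPU, or NPU). "model.xml" is a placeholder path.
core = Core()
print("Available devices:", core.available_devices)

model = core.read_model("model.xml")          # IR (or ONNX) model file
compiled = core.compile_model(model, "AUTO")  # let OpenVINO pick the device

dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder shape
result = compiled([dummy_input])[compiled.output(0)]
print("Output shape:", result.shape)
```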

We've seen a 90+% increase in OpenVINO downloads in the last year, and we're seeing this being the platform to enable, right, and deploy AI inferencing at the edge. You know, good news, we're well on our way to write once and bring AI everywhere. And we're working now to broaden the ecosystem and industry. You know, and the edge is diverse applications, but also heterogeneous architectures. And we're seeing a broader set of different platforms. And while many of those are x86 and Intel-based, you know, the Arm relationship, you know, is expanding nicely with Intel, but also many of those edges are based on Arm as well. So we're quite excited today to announce that Arm is supporting the OpenVINO platform. You know, earlier this year, Qualcomm said they're going to show Llama 2 on their AI implementations on smartphones and PCs starting in 2024.

I say 2024? You just saw it running already, from edge to client to cloud, OpenVINO on Arm and x86. And as you look at this picture, right, you see that, boy, the data center offerings transform how companies operate. You know, the PC offerings transform how people work, but delivering the edge transforms how everything operates. And we're going to see this increasing, I'll say, hybrid operation, where, you know, big training and model creation happen in the cloud, but we need to deploy nimble models running on client and edge. And they may be tethered to it and updating and retraining, you know, but the constant interaction is at the edge. And this is what we call hybrid AI, and this allows you to experience the models that truly deploy broadly at the edge.

And for the developer community and partners, you now have the tools for success at the edge. And with our hybrid AI SDK that we'll be releasing very early next year, this is building the capabilities so that low-code and no-code environments can take advantage of hybrid AI. And while Gen AI is in the spotlight today, this is just a little piece of what we're going to be able to do in the future. And we're changing the status quo with powerful capabilities, with new text analysis, language, visual recognition, chatbot interactions. And, you know, to broadly deploy, we need affordable edge hardware. We have to keep enhancing the hardware capabilities for performance and accessibility.

We're making great strides in this optimized runtime environment, enabling new and more capable hardware at the edge, you know, like our Core Ultra, and enabling the edge applications without modification to take the latest advancements and benefits of CPU and GPU and NPU. That's what OpenVINO does. You know, for the past 15 years, you know, the dominant developer model has been cloud native, and this has enabled this separation of hardware from business logic and application innovation. And we believe that the next decade or two of development isn't cloud native, it's edge native. And that's going to be driven by what I like to call the three laws of edge and AI computing. The laws of physics, latency. I can't go to the cloud; I need it locally.

Economics, the cost of cloud and the cost of bandwidth, and finally, the laws of the land, data sovereignty. We believe these three drive the edge and AI era, and the next phase of application is edge-native app development. But to do that broadly, we need a lot of this, you know, plumbing, this hard work of making the edge accessible, remotable, updatable, and this is what Project Strata does. It brings this edge-native software platform with services and support, and we're bringing together an ecosystem of Intel and third-party apps to enable this edge environment, and with that, to be able to solve many of these problems. And, you know, solving, you know, things like zero-touch management, security, patching, and updates in real time.

You know, all of that hard stuff in the platform so that app developers can take advantage of the latest hardware, latest security, and through that, Project Strata is enabling this ability to onboard, orchestrate, observe, and operate. And we'll be launching this in early 2024. So, you know, as we're continuing this cycle of innovation, we in the tech industry, what do you think? We're pretty cool folk, don't you think? My grandkids, they'll think I'm very cool. You know, they say I need cooler clothes that fit better, right? And, for that, I never really cared too much about that. You know, I mean, a geeky T-shirt, and I'm a happy guy. But we're also seeing AI intervene in fashion and new capabilities as well.

With that, you know, please join me in inviting Meera Bhatia, the COO of Fabletics, to show how we combine AI and modern fashion capabilities. Meera?

Meera Bhatia
COO, Fabletics

Pat, nice to meet you.

Pat Gelsinger
CEO, Intel

Thank you, Meera. So nice to have you with us today.

Meera Bhatia
COO, Fabletics

Nice to be here. Thanks for having me.

Pat Gelsinger
CEO, Intel

So, you know, tell us a little bit more about what we're doing and what we're showing off here.

Meera Bhatia
COO, Fabletics

Yeah. So I am here to help you find the right fit for your clothes, so your grandkids can think that you look more fashionable.

Pat Gelsinger
CEO, Intel

Okay. Okay.

Meera Bhatia
COO, Fabletics

We are going to be using the FitMatch Concierge solution. As you know, Pat, we're an investor in FitMatch, and we believe in the team, but we believe in the problem that they're trying to solve for retailers everywhere.

Pat Gelsinger
CEO, Intel

How does it work? Tell us about it.

Meera Bhatia
COO, Fabletics

So with our partner, FitMatch, we are revolutionizing the retail industry by solving the universal fit problem. So how many times have you gone into a store, and you see something on a hanger, and you think, "That's going to look great on me." But then you get it into the fitting room, and it just doesn't fit your body shape, right? It's kind of demoralizing. It's frustrating for everyone. And for a retail business like Fabletics, that means missed revenue, additional cost for returns, wasted production. That's bad for the planet.

Pat Gelsinger
CEO, Intel

Yeah.

Meera Bhatia
COO, Fabletics

We don't want to do that, right? So with the FitMatch Concierge solution, we are trying to put an end to the adage of, "It just looked better on the hanger."

Pat Gelsinger
CEO, Intel

Mm-hmm. Well, you know, I think we can all sort of relate to that experience. I hate going to stores partially for that reason. So, you know, can you,

Meera Bhatia
COO, Fabletics

It's all right.

Pat Gelsinger
CEO, Intel

give us a little bit of a demonstration here? You know, and what we're seeing is that the environment is powered by Intel's RealSense cameras using our LiDAR technologies. It creates a digital twin, right? It's running on our Core CPUs and using the latest PyTorch AI, all of that running on OpenVINO.
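The stack Pat describes, a PyTorch model served through OpenVINO on Intel hardware, can be approximated with the sketch below. It is a hypothetical illustration only: the ResNet-18 stand-in, the file name scan_model.onnx, and the input shape are placeholders, not FitMatch's actual body-scan model or code.

```python
import torch
import torchvision
from openvino.runtime import Core

# Hypothetical illustration: export a PyTorch vision model once, then serve it
# through OpenVINO on an Intel CPU (or GPU/NPU). ResNet-18 stands in for the
# real body-scan network.
model = torchvision.models.resnet18(weights=None)
model.eval()

dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "scan_model.onnx", input_names=["image"])

core = Core()
compiled = core.compile_model(core.read_model("scan_model.onnx"), "CPU")
output = compiled([dummy.numpy()])[compiled.output(0)]
print("Output feature shape:", output.shape)
```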

Meera Bhatia
COO, Fabletics

All the stuff.

Pat Gelsinger
CEO, Intel

Like we were just talking about. So, tell us more.

Meera Bhatia
COO, Fabletics

All right. So basically, a customer can walk into a dressing room that's equipped with the FitMatch technology. They complete a full-body scan that remains 100% private. And then you get a 3D avatar, and then we'll match that with a curated selection of clothing that's going to suit your shape. So it's kind of like a personal shopper and all with the help of Intel architecture and software. And amazing, the entire process just takes seconds.

Pat Gelsinger
CEO, Intel

Mm-hmm.

Meera Bhatia
COO, Fabletics

So it's super streamlined, super quick, and easy.

Pat Gelsinger
CEO, Intel

Well, yesterday, I—we have the booth out here, and I went, and I got tested as well.

Meera Bhatia
COO, Fabletics

Nice.

Pat Gelsinger
CEO, Intel

You know, you can see my avatar up there, and you know, as I got fitted, I didn't know my avatar would look quite so good as this. You know, I think we're going to tell my wife we have some shopping opportunities.

Meera Bhatia
COO, Fabletics

All right, well, let's take a look. Let's check out your avatar here. There you are.

Pat Gelsinger
CEO, Intel

Okay.

Meera Bhatia
COO, Fabletics

Looking good.

Pat Gelsinger
CEO, Intel

Okay, that's me.

Meera Bhatia
COO, Fabletics

We can view you from all angles. That's you.

Pat Gelsinger
CEO, Intel

Mm-hmm.

Meera Bhatia
COO, Fabletics

So love it. All right, but then we're going to check out and see some matches for you. So what do we have here? We have a selection of curated shirts-

Pat Gelsinger
CEO, Intel

Mm-hmm

Meera Bhatia
COO, Fabletics

Just designed to fit you. We, we believe these are going to fit you. I know we squashed your soccer career.

Pat Gelsinger
CEO, Intel

Yeah.

Meera Bhatia
COO, Fabletics

But, I mean, how about some better, better-fitting pants?

Pat Gelsinger
CEO, Intel

Okay.

Meera Bhatia
COO, Fabletics

Like, that doesn't hurt, right?

Pat Gelsinger
CEO, Intel

I think so.

Meera Bhatia
COO, Fabletics

So, um-

Pat Gelsinger
CEO, Intel

I think so, yeah.

Meera Bhatia
COO, Fabletics

This is all-

Pat Gelsinger
CEO, Intel

I can look like a soccer player.

Meera Bhatia
COO, Fabletics

Yeah, exactly. We want you to look good working out. That's our, that's our mantra, right? So we selected all of these fits just for you.

Pat Gelsinger
CEO, Intel

Well, hey, that's super good, and, you know, looking at the matches is great as well. You know, and you know, the way that you're transforming the experience is really powerful. And you know, Meera, the work that we're showing, and you know, we encourage everybody to come to the show floor and see what it's like.

Meera Bhatia
COO, Fabletics

Great. Well, we have, we've seen great outcomes. I know FitMatch, our partners, have seen great outcomes, lower returns, higher conversion, all the things we want to see to optimize our revenue as a retailer. So pretty excited about these results.

Pat Gelsinger
CEO, Intel

Well, thank you so much. I'm looking forward to my new gear.

Meera Bhatia
COO, Fabletics

Can't wait to see you in it.

Pat Gelsinger
CEO, Intel

Okay.

Meera Bhatia
COO, Fabletics

Thank you.

Pat Gelsinger
CEO, Intel

Thank you, Meera.

Meera Bhatia
COO, Fabletics

Have a good day.

Pat Gelsinger
CEO, Intel

Thank you.

Meera Bhatia
COO, Fabletics

All right. Thanks.

Pat Gelsinger
CEO, Intel

So we have lots going on here at Innovation and lots going on in the developer world. But as we said, you know, now is the time. So we have the access to space through Antaris. What do you think? How many of you think they're the best? Okay, come on, wake up a little bit here. Come on. Yeah. Okay. Great. Okay, how about Deep Render, right? And these advancements in compression. Well, how about Scala? So Greg Lavender, myself, Lama Nachman, we spent time, you know, going through all three of these, talking to the founding teams, you know, discussing with each one of them, and simply put, I'll say, they're all amazing. I just love this, you know, passion, ecosystem of innovation that we get to participate in, unleashing the energy, you know, for each one of them.

Now it's my pleasure to announce this year's Ignite Award winner. This year's winner is Deep Render. Please join me in welcoming to the stage, Chris from Deep Render. And

Speaker 25

I knew that.

Pat Gelsinger
CEO, Intel

So I'm super excited to give you this award, but, you know, you may not know this, but much, much earlier in my career, right, you know, I worked on video conferencing, and we were working on key codecs and compression and H.264. So I really empathize with the problem that you're solving and all the challenges, you know, that it is. And so it's really important to our industry. You know, video is the dominant, you know, use of bits, but really important to me as well. So with that, it's my pleasure to give you this year's award, Ignite Winner of the Year. Thank you.

Speaker 25

Thank you, Pat. Much appreciated.

Pat Gelsinger
CEO, Intel

So can you give us a look at your demo?

Speaker 25

Yeah, cool.

Pat Gelsinger
CEO, Intel

Okay.

Speaker 25

Yeah, I'll show you something cool. Yeah. Now, before we jump into the demo, let's speak a bit about the setup. This laptop here comes with an Intel Core Ultra processor. It has an NPU and is one of the AI PCs we talked about before. Okay? Deep Render runs its five-times-better AI compression technology on the NPU, delivering efficient, low-power, and very good compression locally on this device. And, Pat, you might not know this, but this is a very significant moment for me. Deep Render spent five years researching and developing-

Pat Gelsinger
CEO, Intel

Mm.

Speaker 25

Better compression technology, and with the Intel Core Ultra, we can now bring this technology to hundreds of millions of devices and users. What does five times better compression technology mean for users? It means five times faster internet for everyone. Who does not like faster internet?

Pat Gelsinger
CEO, Intel

Mm-hmm. So show us.

Speaker 25

Yeah, let's jump in. Here's Deep Render's demo application, and if we click on this button, we can showcase it on a number of videos. Let's go with the people video. Here in this video, we can see two videos playing, both at the exact same file size and bit rate. On the left side, compressed with the Deep Render AI technology, and on the right side, compressed with old school traditional compression.

Pat Gelsinger
CEO, Intel

Well, I didn't realize we looked so bad today. Yeah, that's really impressive results. What do you think? Yeah.

Speaker 25

Thank you, Pat. And, yeah, to show that even clearer, if we pause and look at the difference in quality, we can use the slider and have here a Deep Render compressed frame-

Pat Gelsinger
CEO, Intel

Mm-hmm.

Speaker 25

And now a traditional compressed frame. And, yeah, there's a clear difference in visual quality, highlighting five times better compression performance.

Pat Gelsinger
CEO, Intel

I think you did it.

Speaker 25

Yeah.

Pat Gelsinger
CEO, Intel

This is incredible. And the thing that excites me about this is it's a whole new domain, and you think there's a lot more to be gained from this AI approach to compression as well.

Speaker 25

Oh, yeah. AI-based compression is a breakthrough innovation. It's a paradigm shift away from traditional compression towards an AI-only solution, and we are only beginning.

Pat Gelsinger
CEO, Intel

Well, hey, thank you so much, Chris.

Speaker 25

Thank you.

Pat Gelsinger
CEO, Intel

Good to have you here. Again, congratulations on being this year's award winner.

Speaker 25

Whoo!

Pat Gelsinger
CEO, Intel

But these innovations truly are powered by the magic of Moore's Law. And when we laid out our 5-node, 4-year journey, you know, it was like: Wow, that's a bold, aggressive agenda to go make that happen. Well, we're making it happen. You know, our Intel 7 products with Raptor Lake and Emerald Rapids, done. You know, Intel 4, right, with our Meteor Lake product, done, and successfully ramping today in Oregon, and we're just transferring it to our second manufacturing facility in Ireland. Intel 3, as you saw here, we're demonstrating, you know, with Granite Rapids and Sierra Forest, ready for first products going forward. You know, we showed you the first Intel 20A, the what's next of, you know, the breakthroughs in RibbonFET and PowerVia.

20A, manufacturing-ready for next year, on track. But the granddaddy is finishing the race with 18A, you know, the fifth node on our roadmap, and this is, you know, now we're almost finished with the 0.9 PDK. Key, key milestone when we finalize the design rules and unleash this for our internal designs, and you heard me talk about the first two designs of Panther Lake and Clearwater Forest for our client and server in '25. And when you look at the resulting diagrams, and I've been looking at transistor SEM diagrams, scanning electron microscope diagrams, you know, for about 40 years, and 18A and the RibbonFET transistor, it's a Picasso. It's a work of art.

It is elegant, it is beautiful, and, you know, if you're not a geeky kind of guy like me, trust me, this is like, man, the science behind this is just incredible. And this is one of our test chips. And as I said, we'll be bringing our first product wafers into fab very shortly, and customers like Ericsson are utilizing that. Our work with Arm, which we announced back in May of this year, is progressing very well, and we'll be sending our first Intel product designs into fab, you know, in the first quarter of the year. But we're not stopping there. We're, you know, continuing on Intel Next, where we're making further improvements on Gate All Around, the next generation. And while we're well ahead on PowerVia, we're already well underway on the next version of PowerVia as well.

We're already engaging in the next generation of lithography, and before the end of this year, we'll have the first High-NA machine, and that's the generation beyond EUV, and we'll be docking that machine in Oregon. So Anne's and my Christmas present is the first High-NA machine coming to Oregon this year. Happy holidays to all of us because we are powering the what's next of innovation, and we're working hard to de-risk it. And clearly, this is hard work. This is invention, right, you know, for it. And our first test chips, and we're doing modular design work. Part of the reason we're so far ahead on PowerVia compared to the industry is we de-risked it by running it on a FinFET node with PowerVia to show that we could deliver PowerVia and Gate All Around in 20A and 18A.

All of this is stay tuned. We are so excited about the work, the momentum and the progress that we're making. But, you know, the wafers are cool, but this packaging stuff, wow, this is getting incredibly exciting as well. And 25 years ago, Intel drove the creation of organic packages. And when you see any of these chips, you know, this is what we're talking about, this organic package layer. But that's all changing now, right? Yesterday, we announced that the next generation, just like we did 25 years ago, is now moving to glass substrates. And this is a glass panel. This gets sliced up for those substrates. You know, here's an example of a glass wafer, you know, that we've now made. Isn't that cool looking? Yeah, right. You can see through it.

So just incredible breakthroughs in the next generation of packaging. This packaging improves, you know, density, it improves power, it's more thermally favorable. Glass will be the substrate of the future. We're excited to be leading the industry in bringing this to the market as well. It isn't creating prototypes, it's creating volume, baby. With that, you know, Intel is playing this critical role in rebalancing the supply chain and making extraordinary investments in capital, working with an understanding of market conditions, our customers, and the CHIPS Program Office funding. We expect to invest over $100 billion in capital over the next five years between our Oregon, Arizona, New Mexico and Ohio facilities. We now have our proposals in the CHIPS Program Office's hands as we're committed to produce the world's most advanced chips in the U.S. at scale.

But we're also working with the industry to enable this next generation of the industry's evolution, the chiplet generation. Intel led the PCI disruption in the 1990s as we went from racks to systems with standard interconnects. Well, we're working with the ecosystem for the next generation, and we launched from this stage, right, last year, the UCIe consortium, and we joined forces with the ecosystem to create this open chiplet ecosystem. And with that effort now, we have over 120 members, and you see a number of demonstrations on the show floor creating this open ecosystem and enabling chiplet designs. And while we're still very early in this progression, this is the who's who of silicon participating.

I'm happy to show you the first test chips from this work and our foundry systems, you know, partnering with the UCIe IP that we're working on with Synopsys, on Intel 3, you know, combined with our advanced EMIB packaging and a TSMC node. TSMC, Synopsys, and Intel Foundry: Pike Creek, the first test chip for UCIe and the beginning of the chiplet era. Are you okay? It isn't just test chips. What we're finding is, as people move into this AI era, there's extreme interest in advanced packaging and these large systems for large language models and using, you know, them at scale, bringing memory and, you know, packaging capabilities together. You know, we're creating these solutions for, you know, customers in the next generation of the 3D silicon era.

This is exciting, and this is just an area where we're continuing to push the envelope, you know, for the capabilities that we are going to enable our customers, ecosystem, and industry to provide. Of course, we have many other areas of innovation and research as well, and one of those is neuromorphic computing. Neuromorphic is about algorithmic breakthroughs, and the capability that we're enabling with neuromorphic computing, right, is seen-- Next slide. We're seeing that our Intel Labs is showing the neuromorphic capabilities, you know, and being able to solve optimization problems and AI scheduling problems, financial allocations. All of these need a different type of architecture and algorithms, and neuromorphic computing is showing unique capabilities in this area, with dramatic improvements of power and performance over other conventional architectures.

We're finding a lot of momentum, and last year we announced our stackable compact system, Kapoho Point, and we've shared our Intel Neuromorphic Research Community, where we now have 200+ groups participating in it. And this technology, right, is being adopted for accelerating control, planning, and different types of workloads. Intel is working with a wide range of collaborators to enable this computing capability. And, of course, maybe the granddaddy of them all is what happens when we move past digital? And quantum presents a new era of harnessing some of these physical effects. We believe this will enable new solutions in chemistry problems, financial optimizations, climate change, travel, you know, all of these, and most importantly, maybe security. And it promises to make huge advancements, but probably post-2030, when we see quantum supremacy achieved.

And our approach is different. Intel's qubits differ from the other approaches in the industry because we are using silicon, and simply put, we do it in silicon. We're the only company working on silicon qubits and using the same process and materials that we're already using, tweaking them a little bit, to create leading-edge qubits. And simply put, if we get this working, we can do it at scale. You know, and, you know, our most advanced chip for research is what we call Tunnel Falls. And Tunnel Falls, we just released it. You know, here's a wafer of Tunnel Falls, a 12-qubit device on a 300-millimeter wafer, and while Arrow Lake is hot out of fab, this one is rather cold.

We need to operate it at below 1 Kelvin, and for that, we're taking our most advanced EUV technology, our most advanced CMOS fabrication lines, and figuring out how to run them at cryogenic temperatures. You know, each of these wafers provides 24,000 quantum dot devices, you know, and each is a small device, 50 by 50 nanometers square, and over 1 million times smaller than the alternate approaches. And our focus is gaining insights from Tunnel Falls and enabling the research labs and universities with the most advanced capabilities, and not just in the hardware, but also releasing the quantum SDK, a software stack that lets us, together with research universities, operate and learn the silicon qubit approach, and create the programmability, performance, and scalability for enabling quantum computing in the future.

Last year, we had the first-ever Intel Lifetime Achievement Award, and we gave that award to Linus Torvalds. And I was so thrilled to have Linus here, you know, to give him that recognition, and we got such good response from that last year, we said, "Let's do it again this year." And with that, I'm excited today to announce the 2023 recipient of the Intel Lifetime Achievement Award for the most worthy technical achievements that they've been, you know, bringing forward. And with that, join me in welcoming Fei-Fei Li, recognizing her extensive achievements in the field of AI.

Fei-Fei Li
Sequoia Professor of Computer Science, Stanford University

Thank you so much.

Pat Gelsinger
CEO, Intel

You know, and Fei-Fei is, you know, one of those individuals who's been dedicated to STEM work, dedicated to the responsible use of AI, you know, dedicated to the core advancements of AI in the field, you know, for so many, and how to make it truly AI for all. Because of this fearless pursuit, she's been recognized by many across the industry, and this, among so much more, Fei-Fei Li, the most deserving recipient of this year's Lifetime Achievement Award.

Fei-Fei Li
Sequoia Professor of Computer Science, Stanford University

Thank you. Thank you. Thank you.

Pat Gelsinger
CEO, Intel

So in just a moment, we have a great industry luminary session coming up, where you're going to hear a lot more from, you know, Fei-Fei, and Lama Nachman, our Fellow in the area of AI and labs, and responsible AI, will be interviewing her. So, Fei-Fei, why don't you just take a seat over there, and very quickly, we'll get into that. But, you know, fundamentally, you know, what a great time we've had today. You know, and this just summarizes the things that we've talked about. You know, the focus of Intel of bringing AI everywhere, making it truly accessible to all at volume, from client, edge, network to cloud. Delivering the largest systems, but fundamentally making it capable for all. And with that, it truly is a thrill to be with you again, to enable you, the power of developers.

And now, Lama, Fei-Fei Li, this is a great innovation. Thank you all so very much.

Operator

Thank you for standing by, and welcome to Intel Innovation Investor Q&A session. At this time, all participants are in listen-only mode. After the speaker's presentation, there will be a question-and-answer session. To ask a question during this session, you'll need to press star one one on your telephone. To remove yourself from the queue, simply press star one one again. As a reminder, today's program is being recorded. And now I'd like to introduce your host for today's program, Mr. John Pitzer, Corporate Vice President, Investor Relations. Please go ahead, sir.

John Pitzer
Corporate Vice President and Investor Relations, Intel

Yeah. Thanks, Jonathan. I'd like to welcome everyone live in the room, and also everyone joining us virtually on the web. We've got about 90 minutes, broken up between three different sections of 30 minutes each, with Pat Gelsinger, our Chief Executive Officer; Greg Lavender, Senior Vice President and Chief Technology Officer; and then David Zinsner, our Executive Vice President and Chief Financial Officer. This is your opportunity to ask our executives questions. Before we begin, just to stay compliant, please note that today's discussion may contain forward-looking statements that are subject to various risks and uncertainties and may reference non-GAAP financial measures.

Please refer to Intel's most recent earnings release and annual report in Form 10-K, and other filings with the SEC for more information on the risk factors that could cause actual results to differ materially, and additional information on our non-GAAP financial measures, including reconciliations where appropriate, to the corresponding GAAP financial measures. With that, Pat, you just wrapped up a 90-minute keynote this morning. I'm sure everyone in the room was there live, but I know that it might make some sense to give some open remarks on kind of the key takeaways from your keynote this morning before we start with Q&A.

Pat Gelsinger
CEO, Intel

Sure, sure. This slide somewhat summarizes it. I'll start in the middle, right? Obviously, you know, as we talked about, the AI PC generation, you know, as we, you know, are kicking off Meteor Lake, now the Core Ultra, you know, we think really ushers in AI as a major new use case for the PC. And that, you know, to us, is a big deal, right? Like I said, a Centrino-like moment. You know, hopefully, through the keynote, you know, how we're infusing AI capabilities across our platforms came through very clearly, on our server platforms, high-end training, client, but also the edge, and what we're doing to bring it to our edge platforms as well.

You know, as I said, over the last decade and a half, two decades, you know, it's been all about cloud native application development. You know, I think the next era is edge-native application development. Speaking of app development, we announced the GA of DevCloud today, this easy on-ramp to Intel technologies. You know, one of the things that's always been, you know, somewhat frustrating to me is, you know, we come up with a major new hardware innovation, and then we have to get it through the cloud vendor, through the OEM, and into the market, into the hands of, you know, ISVs, et cetera. You know, how do we just circumvent that in a much more aggressive way? That's the Intel DevCloud.

For that, you know, that will have the latest and greatest of Intel, like Gaudi 2s, you know, Xeon Maxes, you know, Meteor Lakes, Lunar Lakes. We'll get them on there as soon as they are getting to the beta stage, but, you know, and then also being able to enable them for scale. So large language model training today, boom! I can do it on my PC, you know, go to the DevCloud and be up and running, you know, on those platforms very, you know, quickly. You know, that half of the page, you know, what we're doing for Xeons and Gaudis and DevCloud and developers. The other half of the page, very much about enabling the advancements, you know, in the future, and the hard piece of it. Five nodes in four years, right?

Obviously, a Meteor Lake, you know, is now, you know, checked on Intel 4. So we say 2 of the 5 are done. We showed you, you know, updates on Granite and Sierra Forest. Sierra Forest is 288 cores, which is sort of like... You know, as I said, when I did Nehalem at four cores, it was sort of like, "Wow!" Right now, it's sort of like 288 cores, it's a bit mind-blowing, you know, the system density that that enables. But, you know, we showed the first Intel 20A with Arrow Lake wafers, just out of fab and powering on healthy. We showed you the first Lunar Lake and 18A soon coming to fruition with the 0.9 PDK.

But then also ushering in what, what I like to call the chiplet era, right? Where, you know, racks became systems, systems became advanced package chips. And with that, you know, the UCIe, you know, the traction that it's gaining, chiplets, we showed the first UCIe, and you can go to the showcase floor and see our first test chip on UCIe. But this idea of three-dimensional silicon, and really bringing that together, and particularly we're finding huge attraction, you know, to those technologies for the most advanced AI chips. You know, and that's our own chips, you know, but increasingly, all of those in the, you know, industry taking advantage of 3D silicon construction.

For that, we also rolled out the next generation past organic packages with glass, which, you know, glass and silicon have much better thermal coefficients. You know, and if they have better thermal coefficients and bonding capabilities, you're gonna be able to create much denser packaging, right? And the ability to build optics directly into the package is, like, way cool, right? And to me, this just, you know, it's gonna bring the picojoule-per-bit capabilities and the maximum, you know, terabits into a package, you know. And, again, for areas like the AI PC and AI for large systems, I think this is gonna be quite differentiating. And we're seeing tremendous interest for our packaging technologies, you know, from foundry customers.

Of course, you know, would it be a developer conference without some geeky things about the future, and showing off our quantum program and, you know, some of the areas, specifically, you know, of silicon, you know, being able to use silicon at cryogenic temperatures and be able to generate, you know, silicon qubits at scale. You know, I really think, you know, there's a variety of quantum programs in the industry. I think ours is the only one that will be scalable, manufacturable, and that will result in quantum supremacy. Of course, other things, you know, we're making progress on our foundry, you know, our first 18A prepay, you know, Arizona, you know, we've just accelerated the build-out. You know, we love Arizona and building quickly.

We have all of our proposals into the CHIPS Program Office. You know, just recently, from a capital capability standpoint, you know, we did the IMS partial sale, our mask operation, and TSMC is an investor, and Bain, you know, is partnering with us, so super excited about that. The Arm relationship is progressing nicely. We participated in the Arm IPO. And the next picture was, if you go to the next slide, you know, just the huge footprint of the manufacturing projects. And for that, you know, the four expansions in the U.S., Oregon, Arizona, and New Mexico, and Ohio, and, you know, we will be having the inauguration of Ireland in the very near future.

You know, our Intel—our second Intel 4 facility that we'll have, and then, of course, the Germany and Poland projects, which we hope... You know, it was a nail-biter vote at the EU Commission. You might have seen that, you know, about a month ago, it was 587 to 10. Right? Yeah. You know, it was really close. They weren't quite sure, so we got, you know, huge support from the EU Commission on those projects. You know, and with that, bringing the only geographically diverse, leading-edge manufacturing capability in what we believe is the most important area of humanity and economics going forward, silicon. As everything is going digital, you know, we will be that provider of, you know, silicon at scale, with leading-edge capabilities, right? In a geographically diverse and capable way.

With that-

John Pitzer
Corporate Vice President and Investor Relations, Intel

I think there might be one more slide. Hard to get a sense virtually of the-

Pat Gelsinger
CEO, Intel

Okay.

John Pitzer
Corporate Vice President and Investor Relations, Intel

The scale at which we do things, but this was a, I think, a good picture of what we're doing down in Arizona.

Pat Gelsinger
CEO, Intel

Yeah, and it was a year and a half ago, where we did the groundbreaking on this facility, and look at this today. You know, these have the largest trusses that have ever been installed on any building, ever. You know, the largest cranes that have been used to lift those 60-ton trusses into place. You know, each fab is over 6 football fields large of open span facilities. You know, these things are just awesome, right? So... And I know some of you have come and visited them, but to just think about the massiveness of these engineering projects, of building the buildings, to build the smallest things that have ever been built. You know, biggest buildings ever done for the smallest things that have ever been built. It's really pretty fabulous.

You know, if you're not a little bit, you know, I'll say, inspired about that magic, right? Then you probably shouldn't be in the semiconductor industry.

John Pitzer
Corporate Vice President and Investor Relations, Intel

Good, great summary, Pat. Logistically, we're gonna try to toggle between questions in the room and questions on the web. We're gonna start in the room. I think we've got a mic going around, so if you could just raise your hand. Please, just, state your name and your affiliation when you get the mic. So why don't we start with Ben up front?

Speaker 26

Hi. Hey, hey, Pat. It's [Ben Dyson] with Bernstein Research. Great presentation today. I have two questions. Could you provide a little more color on the relationship with TSMC? They just invested with you, you know, in 10% of IMS. You do some work with them, you know, for some of your products. You're actually, you know, increasing your packaging initiatives as well, and we know that they have some issues there that might need solving. Just putting a bow around that, how should we be thinking about it as investors, as it stands today?

Pat Gelsinger
CEO, Intel

Yeah, and when you think about the TSMC-Intel relationship, you have one of the most complex relationships that you could imagine, right? I'm a customer of theirs, right? You know, and, when we show off, Meteor Lake, okay, we have multiple pieces of TSMC silicon that goes into that. So, you know, we're a customer. You know, we're also a collaborator, and you saw that at work today. You know, the UCIe initiative, the test chip is using, you know, TSMC silicon, and Intel silicon with, Synopsys, doing the bridging of the UCIe IP. So we're, you know, we're collaborators, you know, with them. You know, they're also a customer of mine, right? And, when we did the IMS initiative, why did they invest in that?

Well, they're one of the largest customers of the mask-making tool as well, so I'm a supplier to them, and, of course, we're gonna compete for some, you know, foundry business as well. So, you know, there's four dimensions to the TSMC, you know, relationship in that regard, and it's a super important relationship for both companies. You know, I chat regularly with C.C. Wei, you know, Mark Liu, you know, board members. Every time I'm in Asia, I meet with them, vice versa. It's a complex relationship, but I think both of us see that, you know, if we work together well, that's the right thing for the industry and for our mutual customers. And as these... You know, as we move to the chiplet era-...

We must work well together because more and more of the solutions that our customers will want is a collaboration of our technologies coming together. Sometimes it might be their packages, sometimes it might be mine. You know, sometimes it'll be their wafer, sometimes mine, but most of the time it will be both, right? As customers will be taking advantage of that. So super important relationship and, you know, one I spend a lot of personal time on, and, you know, we're doing pretty well.

John Pitzer
Corporate Vice President and Investor Relations, Intel

Do you have a quick follow-up?

Speaker 26

Yeah. Thanks, John. So it's a really neat example, changing gears completely to PCs. You've talked about the Centrino moment, and you had a really neat demo with Rewind AI, where, you know, they record basically everything you see and do on your PC, and then you can ask it questions, and it just-- you know, that was an aha moment, like, why you need to have some local processing capabilities for privacy, et cetera. Do you just mind, kind of, like, are there more apps? Do you-- like, what other apps are you excited about that drive this upgrade cycle, and how big do you think it will be in PCs?

Pat Gelsinger
CEO, Intel

Mm-hmm. Yeah, and, you know, if you...

Let's remind ourselves of the Centrino moment, if you would, because the Centrino moment only took about 2.5 years to materialize in the market, right? You know, from 2003, when we launched Centrino, until, you know, laptop sales, you know. So I think of this as the beginning of that moment, right? With the AI PC, and it is gonna take a while for applications to emerge. You know, you know, as I indicated in the keynote, we have a Copilot on this journey, you know, with, Microsoft, and you'll see announcements from them in the very near future. So the software infrastructure, you know, needs to come to play. Many of these developer tools are just coming together. You know, the Rewind demonstration, you know, literally, it got working about 5 days ago, right?

You know, so this is very, you know, very much these things are just starting to come together. I think this is a killer app. I don't think it's the only killer app in that regard. You know, Adobe is just bringing, you know, their capabilities as another ISV in the creator space. Obviously, Zoom and Teams, you know, how they start to do this for, you know, real-time. And, you know, I like the, you know, the video demo, real-time transcription, summarization, and translation, right? You know, for, you know, hearing aid applications. So literally, I'm expecting the future. I'm sitting in a meeting, you know, you're speaking Japanese, and I'm hearing in English in real time, in addition to translation and summarization services. To me, that's the collaboration, you know, of the future.

You know, the Tower of Babel comes to an end with the AI PC, right? You know, that we're able to, you know, bring. So I think that'll be a whole another area. So I see creators, you know, I see, right, this collaboration, personal productivity, as another domain, but it really will be, what's the killer app? Hey, it may not have been invented yet, right? And that's part of what the volume deployment, you know, the Darwinian PC, as Andy Grove would call it, you know, when you start shipping these things in hundreds of millions of units, creative things happen, right? And I, and I think that's what, you know, I think of when we think of the Centrino moment for the AI PC, that we're unleashing creative energies.

I think Rewind is a great example of one of those killer apps. You know, to me, this is so powerful, right? And within minutes, you know, I have it running on, you know, one of my PCs now. It's sort of like, okay, this is, like, freaking cool, right? You know, it just is.

Operator

Perfect, Ben. Thank you. Jonathan, can we take our first question on the web?

Certainly. Just one moment. Our first question from the web comes from the line of Timothy Arcuri from UBS. Your question, please.

Timothy Arcuri
Managing Director, Equity Research, UBS

Thanks a lot. Pat, I had two as well. So, my first question is on glass. You've always been really far ahead in terms of packaging, and you've been working on this for a long time. And it's gonna allow you to put a larger die, and more die, on a single substrate. So, you know, we all focus on the front end, but can you just double-click on this and maybe talk about how far ahead you are in terms of packaging, what this is gonna enable you to do, and when we can see products in the marketplace that will have glass substrates?

Pat Gelsinger
CEO, Intel

Good. So, you know, glass... We've been working on this area for a solid decade-plus.

You know, it's been in component research for a number of years. There's a lot of work to enable a new substrate and packaging technology, new equipment. We brought out one of those panels on stage. The production panels will probably be even bigger than that when we go to production versions of this, so it's new equipment to handle those, right? It's also then the science of how we're gonna carve them up and put them into packages. There's just a ton of industry-enabling work that needs to occur. We mentioned this idea of optics directly in the package. You know, we have some breakthroughs in the area of literally building waveguides directly into the glass package.

You know, you have better thermal conductivity, so you're able to get much higher bump densities and lower and lower pitches, so you're able to get higher bandwidth between substrate and silicon die. So this real idea of 3D silicon, we think glass is a key piece of that. It has somewhat better thermal characteristics, so we'll get better conduction. So all of those things make this a pretty compelling technology if we get it to work. You know, in this area, we think we're several years ahead of others.

You know, there are a couple of other startup companies in this area, but we do think it's an area of, you know, significant advantage for us, and one that we'll be bringing to production volumes in the second half of this decade.

John Pitzer
Corporate Vice President and Investor Relations, Intel

Tim, do you have a quick follow-up?

Timothy Arcuri
Managing Director, Equity Research, UBS

I do. I do, John. Thanks. Pat, my second question is on 18A. I think you used the words almost finished, and you used the words finalizing design rules.

So can you just talk about this in the context of getting external customers to make a big foundry commitment? I know you've talked about a customer yet to be announced that's, you know, already made a commitment. But is the process finalized enough right now for any customer, you know, who would be considering the process to have enough information to make a big commitment? And as, you know, part of that, do you think that a customer would make a big commitment if it's still part of one organization? Thanks.

Pat Gelsinger
CEO, Intel

Yeah. So, you know, teasing that apart a little bit, there's a lot in that question. At the 0.9 PDK, which is what we said we're almost finished with, part of the 0.9 PDK says that you've met quality, reliability, and performance metrics in the industry, and you've adequately solved the yield issues so that you can stabilize the design rules. Because ultimately, what the people doing designs care about is, you know, what are the metal pitches, where do the transistors lie, how can I design with this?

And that's bundled up in the PDK characteristics: essentially the SPICE models, if you would, of the transistors, but then the design rules of how you can actually lay it out and use it. That's all summarized into the 0.9 PDK. And as we're getting close to releasing that, that's sort of the starting gun for people to be able to design. And we've been giving out a number of the test chips, and I held up the 18A wafer on stage. We've been giving the preliminary versions of that to customers, and, like we mentioned, Arm is very excited about the results that they're getting since we announced that partnership in, I think, May of this year.

So all of that's going very well, but really, the starting gun happens when we release the PDK. As I said, we're getting very close to doing that, and that's the point where I can say we're done with the invention of 18A, now it's just the productization of that. So we're getting very close. Key milestone for us and the industry. And that really is the point then, that customers will start making commitments against. Now, as I also said in the keynote today, our first major designs are completing.

So we've been doing the preliminary design, just like the external foundry customers, and we'll be sending our first two major designs, Clearwater Forest and Panther Lake, the ones that we spoke about in the keynote today, a major server design and a major client design, into the fab in the first part of next year. And the other proof point I gave was 20A, which is essentially a preliminary version of 18A, right? Very similar RibbonFET and PowerVia. We now have Arrow Lake out of the fab, showed the wafer today, powered on, looking very healthy as well. So we're giving good, solid proof points against that.

We do expect to get, you know, whale customers, as we've called them, for the foundry. Those are well underway. As I mentioned a few weeks ago at the Deutsche Bank conference, we now have a prepay, right? As I say, if you're willing to put cash on my balance sheet to accelerate our factory build-out and secure supply chains, that's a meaningful commitment. So we're very excited about that. But we do hope to make other announcements of customers committing to 18A, and, as I say, very, very good progress, and we're really satisfied. And as I said, the transistor itself is a work of art.

You know, I am really excited about this technology, and I think customers will find the power, performance, and area, and what they're able to do with it, quite compelling.

Operator

Thanks, Tim. Let's take the next question in the room.

Brian Hopkins
VP, Emerging Technology, Principal Analyst, Forrester Research

Hi. Brian Hopkins, Forrester Research.

Pat Gelsinger
CEO, Intel

Hi. Hi.

Brian Hopkins
VP, Emerging Technology, Principal Analyst, Forrester Research

In looking at your presentation today, it really seems like Intel has changed its strategy from, at least, the perception of a few years ago: very insular, very CISC-focused, very much "this is our IP," because you guys were the giant in the industry for so many years. Now we see TSMC partnerships, we see the Arm rollout partnerships, we see chiplets.

Pat Gelsinger
CEO, Intel

Mm-hmm.

Brian Hopkins
VP, Emerging Technology, Principal Analyst, Forrester Research

Can you talk a little bit more about kind of the evolution in thinking and how Intel wins in this kind of multiprocessor, multi-architecture world we're moving in?

Pat Gelsinger
CEO, Intel

Yeah. There are aspects to our strategy that I would argue haven't changed. Who created PCIe? We did. Who created USB? We did, right? So we've been driving this industry-standards aspect for many, many years, and open source technologies in the software domain. So that was always sort of deep in our ethos, but we were also very biased on two things: it's x86, what's the question? And our factories were for our products, right? So I'll say, fundamentally, you've seen those two fundamental things change. This idea of a deep view of an open, trusted, choice ecosystem was always part of the Intel culture. But now we realize, to some degree, hey, would I like an x86-only world?

Well, of course I would, because I have a unique position on that technology. But that isn't the case anymore, right? And we're gonna be embracing of a multi-architecture, you know, future, and with that, that means GPUs, that means support for Arm, you know, really being much more open. And as I said, you know, we're, you know, rebuilding the company, and in that, we're really creating two companies inside of one. You know, a proper foundry, you know, that's servicing our internal and our external customers. And, you know, for that, you know, even though it's one Intel, right, you know, we're creating clean, simple demarcation, you know, that's accountable with firewalls for external customers.

You know, addressing a little bit more of Tim's question as well, you know, in that, you know, that they can say, "Okay, you know, my designs are protected and secure, but I also get to work with one Intel." Maybe I'm designing some things around the foundry, and I'm using some of Intel's chiplets as I compose my solution for the future. So there's clear benefit, you know, to both aspects of this, going forward. You know, and if that one diagram that I showed of the, you know, the 3D AI chip, you know, and if you look at that picture, you're going to say, "Wow, that makes a lot of sense!" Right? You're going to have base die, you're going to have compute die, you're going to have IO die, you're going to have EMIB bridging, right?

You're going to have advanced packaging, you're going to have different memory technologies that are composed into an advanced solution like that. Am I going to build all of that silicon? Absolutely not. Am I going to build a lot of it? Hey, if we execute our vision, yeah, I'm going to build a lot of it, but doing things like UCIe, I'm enabling an industry, right, you know, to take advantage of this continued compression of design that Moore's Law and now 3D silicon enables. So to me, those are the two fundamental things that are changing, right? The move to a multi-architecture world. Hey, I'm going to be the king of x86 forever and ever, but the x86 will not be king of all markets.

Right, we are opening our foundry and fab capabilities, you know, to enable a unique supply chain of technologies into the industry, and that's the Intel of the future.

Operator

Thanks for the question, Brian. Let's stay in the room. Toshi, I think you might have had a question. Mic's right behind you.

Toshiya Hari
Managing Director and Senior Equity Research Analyst, Goldman Sachs

Hi, Pat. It's Toshiya Hari from Goldman Sachs. Thank you so much for hosting this session. I guess I wanted to ask about your accelerator strategy. You've talked extensively about Gaudi, which is gaining a lot of customer traction. If you can sort of double-click on that and talk about the areas in which you're seeing traction, I think the MLPerf results were really good.

Pat Gelsinger
CEO, Intel

Mm-hmm.

Toshiya Hari
Managing Director and Senior Equity Research Analyst, Goldman Sachs

Is it more training? Is it inference? Is it both? And then maybe Falcon Shores, your early thoughts on that going forward.

Pat Gelsinger
CEO, Intel

Yeah. Yeah, so coming back, and I laid it out in the keynote, but Gaudi 2 is shipping in volume today. Gaudi 3, we're just getting first wafers out, right, and we're now in the package assembly phase of Gaudi 3, but that'll be a 2024 volume product. And then Falcon Shores is the 2025 volume product. And Falcon Shores brings two things together: it's the continuation of the Gaudi architecture, but it also brings in some of the Max capabilities that are more programmable GPU. So I'll say it's GPU- and accelerator-capable. It's a more programmable version as we bring both our HPC and our AI capabilities together in 2025 with Falcon Shores.

So the roadmap, super simple: 2023 is Gaudi 2, 2024, Gaudi 3, and 2025 is Falcon Shores. You know, use cases for it are - I just call it AI use cases. And with that, you know, a lot of it is inference, right? As people move past the model creation, but there's also a lot of training going on. So I'd say it's clearly, you know, it's the data center solutions, and I sort of view that, you know, falling into, you know, sort of three different categories: the big high-end training machines, you know, the large volume inferencing machines, and then the enterprise AI deployments, which could be a combination of AI, mostly retraining, right? You know, taking foundational models and customizing them on local, right, data or the inferencing or use of that in it.

Now, if you think about that third bucket, there's a lot of Xeon in that. You know, in a number of cases, it's just easiest to add a little bit of AI to my already-running Xeon application, and the AMX capabilities make that super great, right? I'm able to accelerate that portion of the workload. Do I go rewrite the whole application to move it to a GPU architecture? Heavens, no. That is hard work on decades of software. Or do I add a set of libraries specifically for the different AI portions? That's what will happen, right? People will add, and it sort of...

You know, as I like to say, it's the law of momentum, right? Software developers will continue developing. They don't do hard work if they don't have to. And Amdahl's law, right? If the AI portion is a small portion of the workload, then AMX on a Xeon is a perfect answer. Accelerate the small portion, because most of the workload wouldn't benefit from AI acceleration anyway. So that's sort of how we see the different application domains. And then the other thing, of course, that we covered today is getting AI to the client and to the edge, which is a different market. But in the accelerator market, we have two strong offerings: a very differentiated Xeon product line with AI enhancements, and we're a lot better than the alternatives.

Not a little bit better, a lot better, and with Emerald Rapids, you know, coming later this year, you know, that gets to be even a bigger advantage over anybody in the industry. And now we're the only company really showing up with competitive high-end training and inference scores, you know, versus the market leader.
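Pat's Amdahl's law point a moment earlier can be made concrete with a couple of lines of arithmetic. This is a generic worked example, not Intel code: it simply shows why accelerating an AI portion that is only a small fraction of a Xeon workload yields a modest overall speedup, which is the argument for in-CPU acceleration over a full rewrite.

    def amdahl_speedup(accelerated_fraction: float, acceleration_factor: float) -> float:
        """Overall speedup when only part of a workload is accelerated."""
        return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / acceleration_factor)

    # If AI is 10% of the application and that 10% runs 10x faster,
    # the whole application only speeds up by about 1.1x.
    print(round(amdahl_speedup(0.10, 10.0), 3))   # ~1.099
    # Even an infinite speedup of that 10% caps out at 1 / 0.9, about 1.11x.
    print(round(1.0 / (1.0 - 0.10), 3))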

Operator

Toshi, do you have a quick follow-up?

Toshiya Hari
Managing Director and Senior Equity Research Analyst, Goldman Sachs

Yeah. Thanks, thanks, John. Just on the tech roadmap, obviously, you're doing 5 nodes in 4 years, which is incredible. You've got Next and Next Plus on the roadmap that you showed today.

Pat Gelsinger
CEO, Intel

Yep.

Toshiya Hari
Managing Director and Senior Equity Research Analyst, Goldman Sachs

Should we expect you guys to go back to sort of a tick-tock, sort of pre-five nodes in four years cadence, or do you just kind of stay accelerated, if you will?

Pat Gelsinger
CEO, Intel

Well, you know, what I'll say is we wanted to get to a certain level of completion on five nodes in four years before we lay out the specific cadence, timing, and capabilities of Next and Next Plus. So I'm looking forward to a meeting that we might have sometime in 2024, when I'll give you a lot more clarity on that question. But rather than laying out a lot of thoughts on Next and Next Plus before we finish this audacious five nodes in four years, we felt it was just appropriate for us to get a little bit further along on finishing what we've laid out so far. So we'll have a great conversation on exactly that question next year.

John Pitzer
Corporate Vice President and Investor Relations, Intel

... Thanks, Toshiya. Unfortunately, we don't have a digital twin of Pat yet. I think that's Innovation 2024, and he's got a pretty tight schedule, so we've run out of time for this session, but I want to thank Pat.

Pat Gelsinger
CEO, Intel

Good. Can we do one last one? I, I wanna do one. I like these folks.

John Pitzer
Corporate Vice President and Investor Relations, Intel

Okay, good. Dwight Blazin.

Pat Gelsinger
CEO, Intel

Oh, okay, Dwight. Okay, buddy, how you doing? So it better be a good question since I made you stand up.

Dwight Blazin
Research Analyst, Davis Advisors

I can guarantee that. Dwight Blazin, Davis Selected Advisors. You know, Intel's vision, especially within and around the intelligent edge, is very PC-centric, naturally.

Pat Gelsinger
CEO, Intel

Yep.

Dwight Blazin
Research Analyst, Davis Advisors

In most of the world, the edge of the network is largely gonna not be a PC. It's gonna be a handset. It's gonna be a handset that is running on a rival architecture Arm.

Pat Gelsinger
CEO, Intel

Mm-hmm.

Dwight Blazin
Research Analyst, Davis Advisors

Can you talk a little bit about how your ability to be a major player in the future of AI might be constrained simply by the fact that your primary vehicle to be a participant at the edge is potentially, you know, not the dominant one at the edge?

Pat Gelsinger
CEO, Intel

Okay, so, you know, first thing, I love this question because it allows me to give you three different answers, right? And I like all three of my answers. The first one is, I'll say, x86 at the edge. I mean, next time you're at McDonald's or Chipotle, et cetera, jump over the counter and look what's under the counter, right? Now, you might get incarcerated, so... right? But, hey, these are all... it's retailers, it's food, it's supply chains, it's equipment manufacturing, et cetera. That's the intelligent edge. Now, a lot of it will be Arm-based. Hmm. You know, what did I do today? I announced OpenVINO with Arm, right?

So we're clearly embracing a multi-architecture edge, right? And with my silicon and the proof points associated with it, hey, we've got to show our architecture is better, but clearly we're demonstrating that we're embracing a multi-architecture future of CPUs, GPUs, accelerators, and third-party architectures as well. Right, you know, the relationship that we announced earlier in the year with Arm. Hmm. Maybe I'm a foundry for many of those use cases, where I'm not the architecture, so maybe I don't get the product IP margin, but maybe I get the foundry and packaging IP margin. You know, let's make one more leap from that. This is my third point, which I really like as well. Who is gonna be the foundry for all of the other big cloud AI accelerators?

Well, I think I got a decent shot of capturing a lot of that as well. You know, I mean, Google is doing the TPU, right? Are they ever gonna replace that with a Gaudi, right? You know, Amazon is doing Inferentia, right, and Trainium, right? Are they gonna replace that with a Gaudi? Maybe, probably not. Can I capture the margin associated with the packaging and the foundry of those? Hmm, I think I got a decent shot of winning that as well. So we have two bites of the apple here, right, you know, in terms of, hey, we're gonna make our products great. We're putting AI into every one of them. We're gonna push them, you know, the client, the edge, the PC, right, you know, for it.

We're gonna compete for the high end of inference and training at scale with Xeon and Gaudi and, you know, Falcon Shores as we build it out, and I believe I'm highly differentiated to be the foundry of scale. And the on-ramp we're finding with many of the foundry customers today is leveraging those packaging technologies as the fast ramp to starting to use Intel Foundry Services, differentiated technology, right? Two bites of the apple of the AI margin pool, which we think will be very significant over the course of the decade. So I love your question, Dwight. Thank you very much, and thank you all.

John Pitzer
Corporate Vice President and Investor Relations, Intel

Thanks, Pat. Make a quick transition to Greg Lavender. Greg, do you want to join me up front?

Greg Lavender
EVP, CTO, General Manager of Software and Advanced Technology Group, Intel

Sure. Thanks, John.

John Pitzer
Corporate Vice President and Investor Relations, Intel

Appreciate it, Greg. Greg joined Intel when Pat rejoined. Prior to being at Intel, he was the Chief Technical Officer at VMware. Pat likes to say he has more software engineers today at Intel than he did when he was CEO of VMware. Greg's actually giving his keynote speech tomorrow, but we thought that we'd give him a few minutes to make some prepared comments before we break out into Q&A. With that, Greg, I'll turn it over to you and your slides.

Greg Lavender
EVP, CTO, General Manager of Software and Advanced Technology Group, Intel

Well, thanks, first off, for letting me out of my Faraday cage in the engineering bowels of the company to come and talk to you about all the cool stuff we're doing. So, yeah, I just wanted to share a couple slides. I sort of gave this talk, I think, about a year ago or so at an investor day conference we had. And, you know, people always ask me, like, "Okay, well, Intel's tried to do software before, and they didn't succeed." But we've got 19,000 software engineers doing software every day. Most of it you don't see. It's all in the foundational layers of the products, right? In firmware, BIOS, you know, memory reference code, which is what we use to make sure DDR5 and DDR6 work at high speed.

You know, we've got to deliver extreme quality with that software and the hardware. In fact, we have all the power management software that does all the power distribution control. That's how we're able to get, like Pat said, with Emerald Rapids, significant performance-per-watt improvements. That's a combination of silicon, process technology, and then the software that we use to manage the power budget as we distribute it across the die and across the SoC. But it's really about moving up the stack, 'cause that's where all the software monetization is happening. We talk a lot about open source. We're major contributors to the open-source ecosystem. Developer productivity is incredible, right? Using things like PyTorch or TensorFlow, or now JAX and XLA from Google.

We're contributing into all these ecosystems ourselves with our software engineers, but we're starting to put together, you know, software as a service, and Intel Developer Cloud is the proving ground for where we deploy that first on all of our latest hardware, and our latest silicon, even before that silicon is, you know, in the OEM or ODM channel or in the CSPs, right? I could put it in Intel Developer Cloud first to get the developers on it, testing it, trying it out, benchmarking, and a lot of them are startups. And if you were at Pat's keynote, you saw three of those startups. They all did that work. All their work was done in Intel Developer Cloud, right? As they were building out their, their capabilities, you know, training at scale and doing what they were doing with their AI models, et cetera.

Then deploying, as you saw, on our new Meteor Lake CPUs. So the software is really there to, like Pat says, be software-defined, silicon-enhanced. I would just say software-defined, silicon-accelerated, 'cause we're in the accelerated days of computing here. So whether it's a Gaudi 2, whether it's our Arc GPU, whether it's our Xeon Max CPU with HBM high-bandwidth memory, it's all about getting the operating systems enabled, getting all the compilers enabled, getting the whole software ecosystem enabled, and that's what we do in the open source space. But as we actually get up to that higher level of the stack, maybe the next slide. Is there a clicker for that? I think you want to click to advance the slide. Perfect.

Yeah, so as you move up... I was just talking about the stuff at the bottom. I call it the foundational software layer, right? It touches all the silicon. You know, I call that market-enabling. It allows us to bring competitive products to market, like I just said. Then there's all this open ecosystem stuff. It's all, again, relatively free; you've got to spend some time as a developer using it. I mean, OpenVINO is a good example. And then that differentiates our product portfolio. For example, OpenVINO is deployed at the edge and the client, you can deploy it in the cloud, and you can even do training with it, not just inference.
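As a rough illustration of the OpenVINO flow Greg is describing, a minimal inference script looks like the sketch below; the model path, input shape, and device string are placeholders rather than anything from the talk, and "GPU" or "AUTO" could be substituted for "CPU" depending on the target.

    import numpy as np
    from openvino.runtime import Core  # OpenVINO Python runtime

    core = Core()
    model = core.read_model("model.xml")            # hypothetical IR model path
    compiled = core.compile_model(model, "CPU")     # or "GPU", "AUTO", etc.
    dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)
    result = compiled([dummy_input])[compiled.output(0)]   # run a single inference
    print(result.shape)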

But when you get up into the top layer of the stack... and we give Amazon all the credit for basically taking a bunch of open source software and delivering services they made a fortune on, right? 'Cause they deliver those as Kubernetes as a service or whatever, Kafka as a service. They've got a whole model there. And that's not a proprietary business model; we can all do that. So, what we've been announcing: you've heard of Project Amber. Project Amber is now GA. That's official tomorrow. Trust as a service. We call it Intel Trust Authority, and it's the attestation service. The Intel Trust Authority brand is essentially a portfolio name.

We have other technologies that will be coming into that portfolio throughout next year, and essentially it leverages all of the hardware security enforcement we have for trusted execution environments, security enclaves, if you want to call them that, to basically do what I call security for AI. That is, you can take your AI models, which, at the edge, can get stolen, and run them in the trusted execution environment. We have two technologies in our current shipping products: Software Guard Extensions, in products like Ice Lake, and Trust Domain Extensions, in Sapphire Rapids, fourth generation, and Emerald Rapids, fifth generation. So we've now got this deployed. Again, just to pre-announce a little bit, I mean, Google's already announced this, but basically Google's now got their TDX trusted execution environments available in GCP.

They announced that a couple weeks ago at Google Next. They'll be joining me on stage for that. And then we've also got it deployed in Azure Cloud, again in customer preview, but we've got a pipeline of customers, right? Paying customers. And I'll have two key security industry customers come on stage with me to demonstrate how they're using Project Amber, i.e., Intel Trust Authority, and our TDX technology with their products to enhance zero-trust security capabilities for their industries, and these are major enterprise security companies. So the whole security industry is gonna adopt this technology. The regulated industries, like financial services, healthcare, obviously the U.S. government, military, DoD, are very, very interested in this technology.

And so we see high growth potential over the next two to three years as confidential computing gets adopted and becomes more mainstream in the industry. And we have more things we're working on to pull through that platform value, 'cause we've made a big investment in those bottom two layers, right? Which we don't recover as software revenue, but we now have a business model, support from our CFO, support from our CEO, to go drive that as a business, as we've said before. So that's kind of the big picture of what the strategy is. But make no mistake about it, we don't stop for a minute making our hardware sing. You've heard me say the software is the soul of the machine that brings it to life.

But now we're gonna monetize it ourselves, as opposed to giving away that value to other people to monetize.

John Pitzer
Corporate Vice President and Investor Relations, Intel

Great, Greg. That was a great summary. Logistically, again, we'll start with questions in the room and toggle to the web. I think we'll go up front. Aaron, if you wanna bring up the mic.

Sean O'Loughlin
TD Cowen

Hi, it's Sean O'Loughlin here, on for Matt Ramsey, from TD Cowen. Question about software and Gaudi and-

Greg Lavender
EVP, CTO, General Manager of Software and Advanced Technology Group, Intel

Yep.

Sean O'Loughlin
TD Cowen

... and the accelerator software stack. I think you've seen one of your competitors really struggle to get their software into general availability on the main frameworks, such as PyTorch and TensorFlow, and it's sort of a recent achievement of theirs after years of work. How is that going for you guys? Is it important, or are we focused on the wrong things on that front?

Greg Lavender
EVP, CTO, General Manager of Software and Advanced Technology Group, Intel

No, no. I mean, PyTorch has emerged. It was sort of TensorFlow and PyTorch, kind of like two horses in the race, with other things around them. Pat had some of them on the screen, MXNet, but OpenVINO is in there, too. It doesn't get as much publicity, but it's certainly widely, widely adopted, particularly in edge computing: industrial edge, manufacturing edge, surveillance edge, et cetera. So the way I like to think of it is, we're actually a major contributor to PyTorch. I'll give you some statistics on that tomorrow. And we actually earned, and didn't pay for,

a seat on the governing board of the PyTorch Foundation, now that it's moved from Meta into the Linux Foundation. So we're now sitting on the governing board of that, 'cause we're major contributors. If you deconstruct PyTorch and look at the bottom layer, it's sort of a plug-in architecture. You have some NVIDIA code plugged in there. You have some AMD HIP code plugged in there. You've got our oneDNN SYCL plugin; that's the open-standard Data Parallel C++ SYCL plugin in there, and then there's a Gaudi plugin in there as well. So we can take the PyTorch ecosystem and all the value at the top of the stack, and the productivity that comes from programming in Python, and you don't have to get down into the CUDA, into the HIP, into the SYCL.

You don't have to write there. What's really been the big game changer is Triton, which came out of OpenAI. It's syntactic extensions in Python, right? Just to give you the ability to express certain matrix multiplication operations and other GPU-accelerated algorithms. It uses a technology that is open source, MLIR, originally created by Chris Lattner at Google, which is what OpenXLA uses and what Triton uses. That allows us, and I'll give a demo of it tomorrow, to essentially map those PyTorch model training or model inferencing semantics onto different hardware. I can run it on my CPUs, which I do on Xeon, so it optimizes for Xeon. It optimizes for our Max GPUs and our Arc GPUs, and it optimizes for Gaudi.

So essentially, at the lowest layer, it's a hardware abstraction layer. We call it a virtual ISA. It's different for a CPU, a GPU, and an accelerator, and so we just use that MLIR technology to essentially map to the right architecture, think of it as the assembly code for the device, to get the maximum performance. And we just published our MLPerf performance benchmarks, which I think show you the value of that kind of software architecture and technology. But that's the fun part of being a computer scientist: making that stuff work.
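To ground what Greg says about Triton, here is the canonical style of a Triton kernel: Python syntax that expresses a blocked, masked element-wise operation which the compiler lowers through MLIR-based machinery to whatever backend is installed. This is the standard vector-add teaching example, not Intel-specific code, and it assumes a Triton installation with a supported device.

    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
        pid = tl.program_id(axis=0)                        # which block this program handles
        offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
        mask = offsets < n_elements                        # guard the ragged tail
        x = tl.load(x_ptr + offsets, mask=mask)
        y = tl.load(y_ptr + offsets, mask=mask)
        tl.store(out_ptr + offsets, x + y, mask=mask)

    def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        out = torch.empty_like(x)
        n = out.numel()
        grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
        add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
        return out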

Operator

Sean, do you have a quick follow-up?

Greg Lavender
EVP, CTO, General Manager of Software and Advanced Technology Group, Intel

Does that answer the question?

Operator

That's perfect. Thank you. We'll poll in the room before going to the web. Anybody with a question in the room? Jonathan, why don't we try our first question on the web?

Certainly. Just one moment.

Greg Lavender
EVP, CTO, General Manager of Software and Advanced Technology Group, Intel

Can we go back a slide, too? That was my last slide, in case, in case it comes up.

Operator

Our first question from the web comes from the line of Christopher Rolland.

Matt Myers
Analyst, on behalf of Christopher Rolland

Hey, guys. This is Matt Myers, actually, on for Chris. I think, for me, a big takeaway here was that Sierra Forest now has 288 cores; you'd previously talked about 144 cores. Just wanted to dig into what exactly changed here. I know AMD Bergamo has 128 cores and 256 threads. How should we think about you competitively here? And also, what's been the reception and interest for Sierra Forest, and does it cater to applications that other processors don't?

Operator

Greg, you want to start, and I can add if-

Greg Lavender
EVP, CTO, General Manager of Software and Advanced Technology Group, Intel

Yeah, yeah, I think, so, again, one of the great things about having your own fab and your own packaging facilities is that we can actually push the envelope on things. And so Sierra Forest originally started out with 144 cores, and we realized we could just take two of those dies and put them together with some clever packaging, right? Using our Foveros technology to basically bond them together, and then package it all up into an SoC, still meet the thermal and performance targets, and get twice the compute. So if you go look at something like Amazon, a lot of what they run there are called... I think about 30%-40% of the workloads in Amazon are Lambda functions.

They're like little Python scripts or other little things that run to do data management on their S3 object store. Even when I was at Citigroup, we used Lambda services a lot, and you just need lots of cores, lots of cheap cores, that execute that code efficiently. So having that large core count is extremely useful. And again, for data prep... you have to prep all your data before you go train on it. So you're just doing string matching and string processing and data mapping and tagging and stuff like that. Having lots of cores just lets you push through lots of relatively basic computations.

It's not high-performance computing, and you can get huge amounts of throughput for the same power bill that you're paying for something with fewer cores that's hotter and more powerful but has less performance per watt. So it's really about throughput computing, and we've all heard about that before. It's not about SPECint computing; it's throughput computing. Just get more work per unit time done through the CPU.
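Greg's point about many modest cores and throughput is easy to picture with a plain data-prep job fanned out across every available core. This is a generic Python sketch with made-up records, not anything tied to Sierra Forest or AWS Lambda.

    from multiprocessing import Pool

    def tag_record(line: str) -> tuple[str, int]:
        # Hypothetical prep step: simple string matching and tagging.
        return ("flagged" if "error" in line.lower() else "ok", len(line))

    if __name__ == "__main__":
        records = [f"log line {i}: status ok" for i in range(1_000_000)]
        with Pool() as pool:  # one worker per available core by default
            tagged = pool.map(tag_record, records, chunksize=10_000)
        print(len(tagged))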

Operator

Is there a quick follow-up?

Matt Myers
Analyst, on behalf of Christopher Rolland

Nope, that's great. Thank you so much.

Operator

Perfect.

Thank you, and would you like to go to another question from the web?

Please, Jonathan, that'd be great.

Certainly. One moment for our next question. Our next question from the web comes from the line of Stacy Rasgon from Bernstein. Your question, please.

Stacy Rasgon
Managing Director and Senior Analyst, Bernstein

Hi, guys. Thanks for taking my questions. Greg, first, I wanted to ask a little more about the merging of the Gaudi and the GPU roadmaps. What exactly does that mean? What functions or capabilities are you bringing from each of those architectures into the merged roadmaps, and what are you giving up? I'm still just unclear what that looks like when it comes.

Greg Lavender
EVP, CTO, General Manager of Software and Advanced Technology Group, Intel

Yeah, good, good question. So, let me see how... I don't want to go too technical, but I need to say a few things, right? So one of the things Gaudi is really good at is tensor operations, right? Think of the silicon sort of as an ASIC. The silicon for the Gaudi processor is highly, highly attuned to executing those math operations at very, very fast speed. In fact, every Gaudi accelerator has 100-gig Ethernet built in, in and out, and we stick eight of these things in a sort of server box that we have running in Developer Cloud. In fact, I'm building them out as fast as I can get them, power them up, and light them up. But essentially, you have eight Gaudi nodes.

Each is 100 gig in and out independently in that chassis, so we have 800 gigs of data in and out, and we actually use an Arista Clos network fabric to interconnect all that. The big thing about training is you've got to pump a lot of data into those accelerators to run those training algorithms, and so they've mastered, essentially, the IO subsystem, right? NVIDIA talks about NVLink; they're gonna take their proprietary fabric and want the industry to adopt a proprietary fabric. We're just using Ethernet. We're gonna do high-speed, advanced Ethernet, and we can move that data around. We can put these eight-node units together into...

You know, I think I read somewhere that 80% of the DGX market is no more than 8 GPUs. So we have an 8-node Gaudi system, but we're putting them together in powers of two. You can do 8, you can do 16, 32, 64, 128, 256. We're building a 256-node cluster right now, and that's 8 times 256 Gaudis that will be available for somebody who wants to do really, really deep training with massive amounts of data, massive amounts of bandwidth, and massive amounts of speed.
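The numbers Greg quotes can be tallied directly; the figures below are only the back-of-the-envelope arithmetic implied by his description, not disclosed specifications.

    gaudis_per_chassis = 8
    ethernet_per_gaudi_gbps = 100                    # 100 GbE in and out per accelerator
    chassis_bandwidth_gbps = gaudis_per_chassis * ethernet_per_gaudi_gbps
    print(chassis_bandwidth_gbps)                    # 800 Gb/s per eight-Gaudi chassis

    cluster_units = 256                              # "8 times 256 Gaudis" for the big cluster
    total_gaudis = cluster_units * gaudis_per_chassis
    print(total_gaudis)                              # 2,048 accelerators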

So I've taken that architecture, the I/O architecture, the network architecture, the systolic array that does the advanced tensor operations, and I'm bringing in the GPU cores. Pat called them EUs; they're the execution units. Those are the 4,096 execution units you have in a typical GPU. So we bring those execution units into the same package, right? All the GPU goodness we've developed over the decades, but next generation, on better process technology, so higher transistor density. We bring all that together into a package, and now, if you want to program in SYCL, or you want to do Triton and compile kernels down to run on a GPU architecture, that's fine, but we use the systolic array to do the accelerated math.

So it's really about a hybrid architecture to bring those technologies together, and that's probably all I should say at this point. But let's just say we take the best of both worlds, 'cause I want to be able to take my existing GPU software that's running today, that a customer might run in Intel Developer Cloud, or that they're running on a Xeon Max, and that will just port forward without any change onto Falcon Shores. So it's really important, for now, that I'm pushing as many GPUs out to researchers and students and academics as I can, to get them developing on the Max parts, and that will just run forward onto Falcon Shores.

'Cause again, we'll use Triton and those kinds of languages to take care of that virtual ISA Layer to make sure the code just works, if you've invested the time to write the software.

John Pitzer
Corporate Vice President and Investor Relations, Intel

Stacy, do you have a follow-up?

Stacy Rasgon
Managing Director and Senior Analyst, Bernstein

I do. Thank you. I mean, maybe this is a better question for Dave, I don't know. But, you know, clearly you're moving from giving the software away to monetizing it. Can you give us any color, even qualitatively, on how big the software revenue is today, and where it goes in three, four, five years? What are the targets, or how should we be thinking about the evolution of that software revenue stream?

John Pitzer
Corporate Vice President and Investor Relations, Intel

Yeah, Stacy, it's a good question. We can try to address it in Dave's session, but, you know, we haven't really gone into details about what the standalone software business model will look like. You know, Pat referenced in his Q&A that we may or may not have an event next year for our outside owners. I suspect that if we do, this will be a subject that we'll address at that time.

Stacy Rasgon
Managing Director and Senior Analyst, Bernstein

Got it. That's helpful. Thank you so much for letting me ask the question. I appreciate it.

John Pitzer
Corporate Vice President and Investor Relations, Intel

Greg, one question I get, maybe as a follow-up to Stacy's first question. You talked about Max to Falcon Shores, that software compatibility. One of the questions I get asked often by our outside owners, all the optimization work we're asking customers to do for Gaudi 2, Gaudi 3, when Falcon Shores comes out, is that all leverageable into the Falcon Shores platform?

Greg Lavender
EVP, CTO, General Manager of Software and Advanced Technology Group, Intel

So the Gaudis, you know, are not directly programmable in the way that a standard GPU would be with CUDA or with the SYCL Data Parallel C++ language, which is really just an ISO-standard extension to C++, so it's just standard C++. But we do essentially have what we call a C compiler component of that. What we do is, if a customer needs some special optimizations, we'll work with them to build their own kernel, which will basically use our TPC compiler, the Tensor Processor Core compiler, to compile to that. 'Cause it's essentially a VLIW, if you remember, Very Long Instruction Word, architecture.

It's not a classic SIMD or SIMT GPU architecture, so it compiles down to that. But what we've done with those customers is carefully assess the portability of that, to port forward into the Falcon Shores architecture, because we've already got that defined. We've got the virtual ISA, which is the abstraction layer above the hardware. But most of what customers are bringing to us today, they can just simply run in PyTorch, and we optimize it, as I said, to run effectively on the Gaudi accelerators without them having to change their code.

But we may give them some tweaks: okay, well, your static graph and your dynamic graph... we might give them some recommendations on how to make that more general, 'cause sometimes people write things quickly and just optimize for one GPU. We want to generalize it and say, "Well, if you generalize it, you can still run on your favorite, your other GPU, but you can also run on a Gaudi, and you'll be able to run on our future GPU." So we want to make sure customers write the most portable code they can to take advantage of what's available in the market.
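For reference, running an otherwise unmodified PyTorch model on Gaudi typically looks like the sketch below, based on Habana's public PyTorch bridge; the module name and the lazy-mode mark_step call follow Habana's documentation as I understand it, and the model here is just a stand-in.

    import torch
    import habana_frameworks.torch.core as htcore  # Habana's PyTorch bridge for Gaudi

    device = torch.device("hpu")                   # Gaudi is exposed as the "hpu" device
    model = torch.nn.Linear(1024, 1024).to(device)
    x = torch.randn(8, 1024, device=device)
    y = model(x)
    htcore.mark_step()                             # flush the lazy-mode graph for execution
    print(y.shape)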

John Pitzer
Corporate Vice President and Investor Relations, Intel

Jonathan, can we take the next question online?

Operator

Certainly. One moment for our next question. Our next question comes from the line of Vivek Arya from Bank of America Securities. Your question, please.

Vivek Arya
Managing Director and Senior Equity Research Analyst, Bank of America Securities

Thank you for taking my question, and thank you for the very informative presentations. Greg, I'm curious, as generative AI becomes the primary workload in the cloud, how much of the heavy lifting will be done on the accelerator, and how much will be done on the CPU? And what implication does that have on Intel? Obviously, the CPU will always be there, but if a lot more of the workload is being done on the accelerator, then does that extend replacement cycles of the CPU? What exactly do you see the implications on the CPU in the medium to longer term?

Greg Lavender
EVP, CTO, General Manager of Software and Advanced Technology Group, Intel

I think Pat sort of tried to address that in his comments. Because, you know, look, the capital investments you have to make to build these large-scale systems, whether it's in Azure cloud or AWS cloud or any other cloud, or even somebody like CoreWeave, which is taking on billions in investment to build out capacity... the number of people who can afford that is limited, and for the enterprise, or the edge computing Pat talked about, it is, you know, very expensive. But once you've trained these, let's say, trillion-parameter models, right? And we can do that with Gaudis as well. But once you...

But again, it's a big capital investment for my CFO to give me to build these things out in Intel Developer Cloud. And so once you make that investment, you want to monetize that investment over and over again, getting as many models trained as you can. But when it comes to the actual use cases, let's say... we talked about a program we did with Boston Consulting Group at Intel Vision. Essentially, they have a bunch of internal data that's proprietary. They're not going to go share that with anybody. They don't want to put it in the cloud, right? They trained on our Gaudi accelerators for that particular work, but they used ChatGPT-trained models, and they just specialized them for what they wanted to do, and then they did the inference on the Gaudis, right?

Because it's cheaper, it's better power, and it's actually faster; we published our MLPerf results on that. So in that sense, there's gonna be a lot of what we call secondary models, which are nimble models or fine-tunings of models, and that's gonna happen with proprietary data within the healthcare industry, the financial industry, what have you. So we think it's not at the edge; it's at the data center level, where everybody's already got all their data. They may not have it all in the public cloud, and they may not want to take it there. But I think what's gonna happen is the market basically develops into the giants who can train these expensive things, and they offer it as a service.

Customers will pay by the drink or by the API call, but they'll build their own chat models and things like that to use inside their environments. When I was at Citigroup, remember RPA, Robotic Process Automation, was gonna revolutionize the industry? It mostly ended up with a bunch of chatbots. But those weren't smart chatbots. These are gonna be much smarter chatbots, right, that you're gonna use for IT helpdesk, customer call centers, et cetera. So this whole market hasn't even begun to explode horizontally. We're still focused on the scale-up, and it's the scale-out where the Xeon, and even our Arc GPUs, are gonna be used at the edge, right, to accelerate the training and the inferencing, hopefully using OpenVINO.

So I think that's kind of what's happening. We've got this sort of big pyramid spike, but the wings haven't fully unfolded yet to say, "Here's the rest of the market and how you address it."

John Pitzer
Corporate Vice President and Investor Relations, Intel

Yeah, and that's a good reminder: we've talked about AI not as a market so much as a workload that's gonna span the compute continuum, whether that be cloud to network, to enterprise, to client, to edge. And in each one of those nodes, there's gonna be different silicon and software strategies to drive optimal TCO for the customer and workload, and we think we're well-positioned to capitalize on that. Do you have a follow-up?

Vivek Arya
Managing Director and Senior Equity Research Analyst, Bank of America Securities

Yes. Thank you, John. The second question is on this other silicon that I think some of your peers call the DPU or the Smart NIC.

Greg Lavender
EVP, CTO, General Manager of Software and Advanced Technology Group, Intel

Yeah.

Vivek Arya
Managing Director and Senior Equity Research Analyst, Bank of America Securities

And the perception is that there are a lot of x86 CPU cycles being used to process overhead, whether it's security or storage or networking. But if we can just take all that overhead, which I think Google said is about a third of the CPU cycles, maybe it's a different number now, and put it on the DPU, what implication does that have on Intel? Because I imagine the DPU uses alternative ISAs, and it's a lot more competitive for you... So does that change how many server CPU cycles I'm using?

Greg Lavender
EVP, CTO, General Manager of Software and Advanced Technology Group, Intel

Well, so first of all, let me just give you a factoid. Pat mentioned Stability AI is buying our Gaudis; the number was 400, I think he mentioned. Those get put into these 8-node clusters, as I said. So you've got a lot of bandwidth coming in and out; therefore, you've got to move a lot of data in and out. Now, the Gaudis don't have an IPU, as we call it; NVIDIA calls it a DPU. But essentially, there's gonna be a lot of innovation over the next 18 months in fabric technology and optics, and the kinds of things you need to do to move data quickly between...

Because massive amounts of data have to move from the CPU, where you're preparing it. You're exactly right: a lot of preprocessing happens before you go do the training step. So once you preprocess that data, you've got to write it to storage, or take it off the storage and put it into the GPU, or send it directly from the CPUs to the GPUs or the AI accelerators. And that bandwidth requirement is gonna drive the systems architecture, which will really, I think, validate the IPU or DPU offload engine you're talking about: essentially a place to free up the CPU from having to do that transmit and receive processing, right? But the fabrics as well are just gonna keep scaling... We will be doing terabits per second into the CPUs, right?

So you've got to be able to feed the CPUs as well. You'll be doing teraflops in the GPUs. So there's a whole impedance-matching issue going on right now. If you haven't read about the Ultra Ethernet Consortium, UEC, go read about it, because we're partnering with Google on that. And I'll just say that we've got great IPU technology, and we're gonna be participating in this change in the way the fabric and the communication and the optics are done.

John Pitzer
Corporate Vice President and Investor Relations, Intel

Vivek, I just might add, what, what you are characterizing as a risk, I think we actually see as an opportunity in, in large part. You know, as we bring down the cost per function, which is what all these architectural changes are doing, our view is that, that the number of new use cases tend to dwarf any deflationary impact. I think a good example is what happened with virtualization back in 2008, 2009. The concern going into that trend was that you were gonna be able to do on 10 CPUs what used to take 40. Effectively, what we were doing was bringing down the cost of compute in a market that wasn't compute saturated. Every time we do that, there's always a corresponding growth in new use cases and workloads that dwarfs any of the deflationary pressures.

Greg Lavender
EVP, CTO, General Manager of Software and Advanced Technology Group, Intel

Yeah. And our IPU has an Intel technology in it called QAT, Quick Assist Technology. What it actually is, is compression, decompression, encryption, and decryption, okay, in the IPU. Those are typically expensive operations to execute. Our CPUs actually do them quite well, but we took that technology and pushed it down into the IPU. So we can do very fast encryption and decryption in and out of storage, very fast compression and decompression in and out of storage, and encryption across the fabric, particularly in these trusted execution environments, where you want to trust that the data is protected on the CPU, protected across the network, and protected on the GPU or the accelerator.

That's what this whole trusted computing envelope needs to look like. It's not just in a server or in a cluster, it's across the fabric to the accelerator devices.

David Zinsner
EVP and CFO, Intel

... Thank you very much.

John Pitzer
Corporate Vice President and Investor Relations, Intel

Thanks, Vivek. Jonathan, can we have the next question?

Operator

I'm not showing any further questions in the phone queue at this time. However, as a reminder, if you do have a question, please press star one, one.

Greg Lavender
EVP, CTO, General Manager of Software and Advanced Technology Group, Intel

Can we go back a couple slides? Because there was one slide-

John Pitzer
Corporate Vice President and Investor Relations, Intel

Yeah, of course.

Greg Lavender
EVP, CTO, General Manager of Software and Advanced Technology Group, Intel

I wanted to talk about. There's one that had the three circles. Get me back to that one. Just to pick up where I started talking about trust as a service: this one right here shows the three target areas, right? We're not trying to be everything to everybody. Basically, we've got unique IP in our hardware and our software, and with our Intel Trust Authority, to create this virtuous circle, as I call it, which is the SaaS pull-through of hardware revenue. So we're selling the hardware. I can come over the top and sell the SaaS, but the demand for the SaaS software and the capabilities it provides pulls the hardware. And I did this at Sun Microsystems when I was there.

Basically, they acquired my startup company in 2000, and even today, I think Oracle is making a $1 billion run rate business on that technology that they bought, you know, that I had created. And so it's essentially this kind of virtuous circle where the hardware needs the software and the software needs the hardware. So this is sort of the model that we're using, and then we've got AI as a service. So Intel Developer Cloud is where all of our AI technology delivers first, and then we'll work with our OEMs and the enterprise, SIs in particular, to help them adopt that software for running on-prem.

We have our performance optimization, and I'll make an announcement tomorrow about a partnership with Databricks around how we use our Granulate technology to accelerate Databricks by up to 30% and reduce the AWS service costs that you'd have for running that with no code changes. We just do it all in the basement by accelerating the instructions.

John Pitzer
Corporate Vice President and Investor Relations, Intel

Perfect. With that, we'll make another quick transition. But Greg, appreciate the time this morning. I think you'll be on stage tomorrow morning at 9:30 a.m., so please listen in, either live or via the web. And with that, I'll bring Dave up. Thank you, Greg. Appreciate it. Saving the best for last.

Greg Lavender
EVP, CTO, General Manager of Software and Advanced Technology Group, Intel

Yeah, hardly.

John Pitzer
Corporate Vice President and Investor Relations, Intel

I think Dave has a couple of introductory slides, and then we'll move right into Q&A.

David Zinsner
EVP and CFO, Intel

Yeah. Also, the team wanted me to remind everyone that they can re-queue if they want. Even Stacy Rasgon's dog can re-queue if he wants. Or she, I don't know if it's a he or she. Okay, so I thought maybe I'd just do a few slides, tee this up, and then we'll do these quick and then go to Q&A. You know, the top part of this chart is really, you know, Pat's vision and really a lot of what he talked about in his keynote. It's around execution in terms of getting process technology back, in terms of getting products back to where we want them in the marketplace.

It's around building out the foundry business, and of course, what Pat talked a lot about today, driving artificial intelligence across all workloads of compute. But underpinning all of that, and what I focus a lot on, is financial discipline, and it's really around smart capital, around driving efficiencies operationally, and around unlocking value where it's kind of trapped within the company. So if you go to the next slide, let me talk first about smart capital. This is something we talked a lot about at Investor Day last year, and there are really five elements to smart capital. The first is that we build shells first, and that's kind of obvious because fabs take four-year lead times to build. But it's important because it's the smaller amount of CapEx.

It's also the expense that gets depreciated over the longer period of time, so it tends to have less impact on gross margins. So you'll always see us want to have white space ahead of demand. And that's where a lot of the CapEx that we're spending today is going. In fact, more than half of our CapEx in 2023 will actually be on shells. The second is government incentives. Obviously, you know, the CHIPS Act in the U.S., the CHIPS Act or the CHIPS investment credit in the EU. There are local incentives. Of course, there's the investment tax credit as well. So this is a key part of, and enables us to make, the investments, both in the U.S. and Europe, to build out the footprint that Pat talked about earlier.

There's customer commitments. That's another key element of our strategy. Pat talked about the prepay that we got this quarter. We also got a prepay associated with our partnership with Tower to build out one of their nodes. And so this will also be an important element of our strategy. Not only does it, you know, provide cash as we're making these investments, but it also, you know, helps show commitment from the customer perspective in relation to our foundry business. Now, financial partners, obviously, are also a big component of this. We announced back in August, was it, I think last year, the Brookfield partnership, where they're co-investing with us in Arizona. We're spending almost $30 billion in that investment.

They're roughly investing about half of that with us, and then we both, you know, share in the financial returns of that investment. You're likely to see others of these as we go forward, these kinds of partnerships, to help augment what we're investing on our own. And then lastly, and Pat talked about this relationship we have with TSMC, but, you know, we want to continue to have a foundry relationship. We think that's a good, smart way, smart capital, a smart way of managing the demand and our supply. And so I think you'll see us continue to use foundries alongside our own, and this chiplet architecture that we're moving to, as Pat mentioned, really enables us to strengthen that foundry partnership even further.

So in all, what that means is, you know, we think we can drive the 2022, 2023, 2024 CapEx intensity to this kind of mid-30s% of revenue. That should settle, longer term, into more of a 20%-30% range, call it mid-20s. And then we expect to see offsets in the range of 20%-30% of our gross CapEx investment to enable us to get to this CapEx intensity. Next slide. Then on the operational efficiency side, I think we've done a lot here, and probably, just because it's come in increments, it's been hard for investors to really see the totality of it. We've exited 9 businesses since Pat came aboard. We saved about $1.7 billion annually from exiting those businesses.
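To make the smart-capital arithmetic above concrete, here is a minimal illustrative sketch of how gross CapEx intensity, offsets, and net CapEx intensity relate. The revenue figure and the specific offset percentage are hypothetical assumptions chosen only for illustration; they are not guidance from the transcript.

```python
# Illustrative sketch of the smart-capital arithmetic described above.
# The revenue and offset values below are hypothetical, not Intel guidance.

def net_capex_intensity(gross_intensity: float, offset_share: float) -> float:
    """Net CapEx intensity after incentives, prepays, and co-investments."""
    return gross_intensity * (1.0 - offset_share)

revenue = 55e9           # hypothetical annual revenue, in dollars
gross_intensity = 0.35   # "mid-30s" percent of revenue, gross CapEx
offset_share = 0.25      # midpoint of the 20%-30% offset range

gross_capex = revenue * gross_intensity
net_capex = revenue * net_capex_intensity(gross_intensity, offset_share)

print(f"Gross CapEx:   ${gross_capex / 1e9:.1f}B")   # roughly $19B gross
print(f"Net CapEx:     ${net_capex / 1e9:.1f}B")     # roughly $14B net
print(f"Net intensity: {net_capex / revenue:.0%}")   # about 26% of revenue
```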

We have a very regimented approach that we've established to look at all of our investments every half year or so and rationalize where we're putting dollars, and, you know, where it doesn't make sense for us to be, we're not shy about exiting those businesses quickly. The second key aspect of that, and we talked about this. Was it the end of May that we did that thing? Yeah, end of May, we talked about this. It's our Internal Foundry Model. So this is essentially taking the manufacturing and technology development group and our foundry business unit, pulling them out, and making them a separate business, much like all other foundries out in the marketplace, and really starting to hold them accountable to a P&L. I think you'll see as we...

You know, we'll segment report this way in Q1 of next year, but you'll start to see some, I think, meaningful improvement as we progress. I can already see, as we're talking to the team over there, and they're starting to look at, you know, where their margins are, where their spend is as a % of revenue, where their operating margins are, and they're already looking for: "Okay, I know what I need to do to improve that, to make the P&L look better." This is a powerful part of the story and what drives a lot of the $8 billion-$10 billion of savings we think we'll get by the end of 2025. And then lastly, it's just, you know, focusing on OpEx.

In addition to the portfolio optimization, there are a lot of things we can do to drive better efficiency. We cut out about $2 billion. We'll probably beat that number when 2023 is done and dusted in terms of savings reductions. And, you know, we think there are more opportunities to drive efficiency within OpEx. In fact, some of it will be through the usage of AI. And so our goal then, you know, now that we've gotten the number to a more reasonable number, is to start to grow that at a rate that's quite a bit below what our revenue growth rate is and continue to see OpEx leverage over time. Then the last slide is the value unlock slide.

This is probably another area that I think gets underappreciated. A key part of our strategy is to look at assets within our business, and if there's an opportunity to unlock the value by doing something different than just running it within our business, we'll do it. And so the Mobileye example is probably the most significant one, where we IPO'd that late last year. We did another offering this year, and the market cap on that business is $30 billion on its own. We've actually generated cash from that, which has helped us invest back in technology and products. We're likely to see some other ones that are similar to that over time. IMS is another example of it.

We got a $4.3 billion valuation. That's a mask-writing business. It's particularly well-positioned in the EUV space, and the valuation was great. We actually think it's gonna be a lot better over time, and I think so do the partners that joined with us, Bain and TSMC. We got some cash infusion from selling a stake in that. And then, you know, we all hope and expect that the valuation will increase significantly as EUV becomes more and more critical in semiconductor manufacturing. And then lastly, it's just a whole host of, you know, partnerships that we've done that, you know, have moved spending and so forth off of our books, onto someone else's, who we think can exploit it better, in some cases.

Some of it's just relationships that we've built, like the Tower or RAMP-C with the government. So there's a whole host of things that we will do over time in this area to better optimize where the investment is being made and unlock value for shareholders. So with that-

John Pitzer
Corporate Vice President and Investor Relations, Intel

Perfect.

David Zinsner
EVP and CFO, Intel

Turn it to Q&A.

John Pitzer
Corporate Vice President and Investor Relations, Intel

Yeah, great summary of the financial discipline. Let's open it up to Q&A in the room. We can start with Ben.

Speaker 26

Wait for the mic, right?

John Pitzer
Corporate Vice President and Investor Relations, Intel

Yeah. Yeah, please.

Speaker 26

Hi, Ben Wright, Susquehanna. Thanks, Dave, and John. Dave, you recently talked about the data center business being down less than expected-

David Zinsner
EVP and CFO, Intel

Mm-hmm.

Speaker 26

at the same time, talking about Gaudi, the pipeline being $1 billion

David Zinsner
EVP and CFO, Intel

Mm-hmm

Speaker 26

And growing into, you know, as you're going out throughout-

David Zinsner
EVP and CFO, Intel

Yeah

Speaker 26

the quarter. How much are those comments interrelated at all? And, you know, as we kind of look, you know, throughout the year and into next, what are the key milestones that, you know, in terms of Gaudi and then, you know, the data center improvement, should we really be focusing on?

David Zinsner
EVP and CFO, Intel

Yeah. Okay, so let me unpack it a little, Ben. Just in terms of how the data center business is doing, it will be down quarter-over-quarter. I don't wanna give anybody the impression that the business is gonna be up this quarter. And a lot of that is actually our FPGA business had a huge backlog, and they're at the point now where they've kind of satisfied a lot of the pent-up demand. And so there was, you know, kind of a natural correction we were gonna see in the FPGA business, which we have seen. In addition, you know, we did expect some share loss this half of the year.

In fact, we thought we would see share loss in the first half of the year. We actually held share better in the data center business than we thought. But, you know, we do expect to see a little bit of that in the back half of the year. And then lastly, there was a lot of inventory in the channel, in data center and, you know, much like client, by the way, but client kind of recognized it earlier and kind of worked that inventory through the system, and we're now kind of seeing the recovery from that. Data center was a little bit more delayed, and so, you know, we have to see the recovery from that.

So I think inventory digestion, we're gonna see that for Q3 for sure, and likely for Q4, before we start seeing that turnaround. On the Gaudi side, we will see revenue this quarter, but it will be modest. You know, the billion-dollar pipeline's a little bit more of a 2024 story than it is a 2023 story. But we do expect to see revenue this quarter. We think we'll see more revenue next quarter, and then, you know, next year, I think we'll have something that is reasonably meaningful for us from a Gaudi perspective. We still have to build the pipeline up, though. You know, we talked about having more than $1 billion at the earnings call.

I tell you, right now, as we sit, it's a lot more than it was at that point. Our expectation is, by the time we talk about earnings at the end of October, the number will be a lot higher than where it is today. And, you know, the whole nature of a pipeline is you build it as much as you possibly can, because you understand that the conversion is not 100%. It's gonna be some percentage lower than that. But if we can get a big number coming into the year, the expectation is we can translate that into meaningful revenue for Gaudi next year.

Operator

Do you have a quick follow-up?

Speaker 26

Yeah, quick follow-up is packaging. Just wanted your take on it. You know, we talked about it with Pat a little bit, but it's kind of coming fast.

David Zinsner
EVP and CFO, Intel

Mm-hmm.

Speaker 26

The needs in the industry, some of your advanced techniques, the dynamics of when this could be material revenue, I was just wondering what your thoughts are. How many months away are we from, you know, hearing more about it and seeing it in the P&L?

David Zinsner
EVP and CFO, Intel

Yeah. The good news with packaging is, you know, whereas in the wafer business you're talking to customers and then you're gonna see revenue a couple of years out, in packaging, you know, from the time you're talking to the time you can turn revenue can be, like, 3-6 months. And so we do expect that a lot of the traction that the team is getting right now will translate into revenue relatively quickly, and in fact, even in 2024. It's a good business. The margins are quite healthy. We have a good portfolio for advanced packaging, and advanced packaging is tight in the industry.

You know, customers are out there, and we're really optimistic about that. That pipeline actually is building quite rapidly, too. You know, the only difference is that, you know, when you win those businesses, you win them in increments of $100 million, as opposed to wafers, where you win them in $1 billion increments. The real advantage, beyond what we think is a solid business with good profitability, is it's a great cross-selling opportunity. You know, we get a lot of customers through packaging.

We show what we can do in terms of servicing customers, and then we, you know, cross-sell them with our wafer capability, and we think it's a great symbiotic relationship.

Operator

Perfect. Toshiya, your hand is raised again. Please go ahead.

Toshiya Hari
Managing Director and Senior Equity Research Analyst, Goldman Sachs

Thanks, Dave. I have two questions. First, on gross margins, I know you guys guided to, I think, it was 43% for the current quarter.

David Zinsner
EVP and CFO, Intel

Mm-hmm.

Toshiya Hari
Managing Director and Senior Equity Research Analyst, Goldman Sachs

You sort of talked about December being up as well-

David Zinsner
EVP and CFO, Intel

Mm-hmm

Toshiya Hari
Managing Director and Senior Equity Research Analyst, Goldman Sachs

given some of the tailwinds.

David Zinsner
EVP and CFO, Intel

Yeah.

Toshiya Hari
Managing Director and Senior Equity Research Analyst, Goldman Sachs

As we go into 2024, and I know you're not gonna share a number, but how should we think about the headwinds and the tailwinds on your business? Because, obviously, you've got a lot of nodes, a lot of products ramping in a very short period of time, so how should we think about this one?

David Zinsner
EVP and CFO, Intel
Well, look, there are tailwinds, for sure, and there are headwinds. I mean, on the headwinds side, you know, we're gonna have this underload hangover, I think, all throughout next year, partly because we won't have ramped back up to full production until later in the year, and partly because, by the time you see the cost of production inventory go onto the balance sheet and then through the P&L, it takes some time. So, you know, there's an effect there. In addition, we are still, you know, driving a whole chunk of startup costs, and, you know, this 5 nodes in 4 years is not cheap to do, and we're doing a lot of nodes, you know, kind of stacked on top of each other.


And so we're in the billions of dollars of higher startup costs than we traditionally run at because of the 5 nodes in 4 years. So those things are causing some headwinds. And in addition, you know, some of the products that we're releasing, which have, you know, better performance and so forth, also come with an incremental cost to them. That said, there are some tailwinds. Obviously, higher revenue is always good for Intel because we've cut costs. You know, we're a high fixed-cost business, and so incremental revenue has good marginal profitability, and so, you know, we can, you know, drop that to the bottom line, and that helps a lot.

In addition, the new products, you know, as we migrate our way through process nodes and through products, as we do better in terms of the performance there, you know, the expectation is that, you know, we'll just do better in the marketplace. And so I think, year over year, we'll see gross margin expansion. It may not be, you know, hundreds and hundreds of basis points next year. It may be more modest for the reasons I talked about from a headwind perspective, but we do expect margin expansion next year. I think as we progress and we start to get past the five nodes in four years, we'll talk... you know, Pat left us hanging there with how quickly we'll do the cadence.

But I suspect that our startup costs will modulate. And I suspect that as we get to where we're at leadership process technology with leadership products, that's a whole different dynamic that we'll have in terms of margins for those products, and that's where, you know, we'll really start to see a lot of the margin improvement. And ultimately, you know, what Pat has said, which I subscribe to as the finance guy, is, you know, our goal is to get to 60% gross margins. And we think we have good line of sight to do that, and in particular, given, you know, this Internal Foundry Model and what we can drive in terms of efficiencies, it makes us even more confident in our ability to get to the 60% margins.

John Pitzer
Corporate Vice President and Investor Relations, Intel

Toshiya, do you have a follow-up?

Toshiya Hari
Managing Director and Senior Equity Research Analyst, Goldman Sachs

I do. Thanks, John. On capital offsets, you know, 20%-30% of gross CapEx, I think on the earnings call, you said you might be toward the higher end of that, or you're increasingly comfortable that you could do toward the higher end.

David Zinsner
EVP and CFO, Intel

Yeah.

Toshiya Hari
Managing Director and Senior Equity Research Analyst, Goldman Sachs

Obviously, since then, you've had a couple of, you know, prepayments and things of that sort. So as you look forward into 2024, could you potentially do better than 30%, and how should we think about free cash flow?

David Zinsner
EVP and CFO, Intel

Yeah, potentially, it could be higher. You know, we'll have... The expectation is that we'll have a clear line of sight into what we can expect from the U.S. CHIPS Act. We already know where we're gonna hit from an EU CHIPS perspective. You know, we'll see what comes of the prepays. There might be opportunities to do better on the prepays. We'll have SCIP 1 also helping us next year, and then there's a potential for a second SCIP that might also augment the offset. There's a potential. There's gonna be an interesting dynamic on the foundry side. You know, as we do better in foundry, there's gonna be more demand on us on the gross CapEx side.

You know, we'll have to be balancing the demands from customers on the foundry side, in addition to our own requirements for wafers, with those offsets, you know, to see exactly where we ultimately settle out. But I feel pretty confident that we're certainly in the 20%-30% range, and we're certainly biased towards the high end. From a cash flow perspective, you know, our goal is, you know, to get to breakeven in the more near term, so I would count 2024 probably in that time frame. And then ultimately, the goal is to get cash flow as a percent of revenue to be 20%.

I think, you know, as we kind of progress through next year and looking into the following year, we have an opportunity to start to move our way towards that 20%.

John Pitzer
Corporate Vice President and Investor Relations, Intel

Toshiya, just to level-set the question, what we said on the last earnings call is, for 2023 plus 2024, we expect offsets at the high end of 20%-30%. That was a change from 90 days earlier, where we talked about 2023 standalone net capital intensity being in the low 30s, and the reason for the change is that there's some uncertainty about when offsets hit: is it the back half of this year, or is it next year? The implication is that we could do better on a one-year basis in 2024 than the high end of 20%-30%. Jonathan, can we take a question online, please?

Operator

Certainly. One moment for our next question from online. Our next question comes from the line of Christopher Caso from Wolfe Research. Your question, please.

Christopher Caso
Managing Director and Senior Analyst, Wolfe Research

Yes, thanks for taking the question. Good afternoon. The question, Dave, is on CapEx, and I recognize you probably don't want to be specific on the CapEx into next year, but could you give us a sense of some of the puts and takes? You know, you talked about the CapEx this year being half shells. You know, I imagine that's gonna come down a bit next year and transition over to WFE. In addition, is there any CapEx associated with foundry for some of these prepayments next year? And then, you know, any changes in CapEx that relate to market conditions, at least in qualitative terms?

David Zinsner
EVP and CFO, Intel

Yeah, so good question. Maybe answering your last part first, and then we'll kind of go in there. So the way I look at CapEx, I kind of bucket it into three pieces. The first piece is the shell capacity, and in probably most cases, I'm sure there's an exception to that, we'll want to make those investments because the lead times for shell space are pretty long, and you never want to be caught not having the shell and then having the demand. That's the worst answer possible. So we'll always try to invest ahead in shell. So I try to keep that one untouched, you know, perhaps adjusting it around the edges, but mostly keeping it untouched.

The second piece is the investment we make in Oregon for process technology, because we always want to make that investment, because that helps us, you know, accomplish the five nodes in four years. So I definitely don't touch that. Then the third is what I would call the capacity. It's the high-volume capacity that we put online to meet demand. And that's the one we are always kind of course-correcting based on our kind of longer-term view of wafer demand, where we think our share will be of that wafer demand, and then, based on that, what we think we'll need in terms of fab footprint, equipped fab footprint, to be able to service the demand, and then not be over...

You know, not have too much capacity, but also not have too little capacity. And so we're always, every month, you know, kind of adjusting those things around the edges to make sure that we're aligned with the expectations there. As we look into next year, we still haven't quite got the, you know, full CapEx picture done. We'll have that probably by November, and I imagine that when we have our January call, I'll probably be able to give you some pretty good insight into what we're doing. It's obviously somewhat market-dependent because, you know, that third piece, that capacity-oriented aspect of it, is something that we will modulate based on expectations. You're probably right.

There will probably be a little bit less, as a percentage, on fab footprint, call it investment, and a little bit more on the equipment side. But probably not significant. I think we'll make a pretty significant investment in clean room capacity next year, as we did this year, given, you know, we were essentially behind in terms of our clean room requirements, and so a heavy investment is gonna be necessary next year. And you saw the picture of Arizona. You know, we're not done by any stretch of the imagination. We gotta still work on that.

And then, you know, the one thing I'd just say is our goal is to be in this mid-30s CapEx intensity, and we said, you know, that would be a 2022, 2023, 2024 kind of goal. And so for 2024, you can expect that that's generally what our goal is. And, of course, you know, we'll have a better sense of what the denominator looks like, from a revenue perspective, as we get closer to the beginning of next year.

Operator

Chris, do you have a quick follow-up?

Christopher Caso
Managing Director and Senior Analyst, Wolfe Research

I do. Thanks, John. Just to dig in a little more on the startup costs. And again, you're not providing visibility beyond 18A now for what that is.

David Zinsner
EVP and CFO, Intel

Mm-hmm.

Christopher Caso
Managing Director and Senior Analyst, Wolfe Research

But just in general, as we get towards 18A. And, you know, in the past, when there was a tick-tock model, you'd have, you know, startup costs ramp one year, and you'd get them back in the following year. You know, with a tick-tick model now, you know, those are more constant. But I guess the question is, as we go into next year, is it an incremental increase in startup costs, or is it just, you know, a continuous headwind, you know, similar to what you had this year for startup costs?

David Zinsner
EVP and CFO, Intel

Yeah, I'd call it more the latter. It's just gonna be a significant year for startup costs, but this year was also a significant year for startup costs.

Operator

Thanks, Chris.

Christopher Caso
Managing Director and Senior Analyst, Wolfe Research

So, not really incrementally higher then?

David Zinsner
EVP and CFO, Intel

Probably not, although, you know, we still haven't built the plan for next year completely, so we'll have to look at that. But I would say, you know, first order is probably pretty similar.

Christopher Caso
Managing Director and Senior Analyst, Wolfe Research

Got it. Thank you.

Operator

Cool. We'll go back to the room for the next question.

Kunjan Sobhani
Senior Semiconductor Analyst, Bloomberg Intelligence

This is Kunjan Sobhani from Bloomberg Intelligence. Thanks for letting me ask a question. Today, during the keynote, Pat demonstrated a lot of good use cases and applications for the AI PC, and talked about how analogous it is to the Centrino era. Does it change your long-term assumptions or projections, whether it be for upgrade cycles or consumption, versus what you outlined during the PC TAM event earlier this year? Because intuitively, it should accelerate or pull in-

David Zinsner
EVP and CFO, Intel

Yeah

Kunjan Sobhani
Senior Semiconductor Analyst, Bloomberg Intelligence

some of those metrics.

David Zinsner
EVP and CFO, Intel

Yeah. So we are gonna run that business with the assumption that it's got a relatively modest growth rate. We already have a strong share there, so, you know, I think share changes will not be significant in that space, and that's how we're gonna run the business. That's how we're gonna invest in it. That's how we're gonna think about it. But we recognize there are things that could catalyze this business for sure. One is the Windows refresh next year. That could be a strong catalyst, which would, you know, be an upside to the business. And I think for sure, if the Centrino moment stimulates a lot of demand, which is not unforeseeable, I mean, that is a possibility, then, you know, that could be a strong upside case for the business.

John Pitzer
Corporate Vice President and Investor Relations, Intel

Do you have a follow-up or... Jonathan, can we take another question online, please?

Operator

Absolutely. One moment for our next question. Our next question comes from the line of Aaron Rakers from Wells Fargo. Your question, please.

Aaron Rakers
Managing Director and Senior Equity Analyst, Wells Fargo

Yeah, thanks. Thanks for doing this and taking the question. I appreciate it. So I guess I just wanted to take maybe a step back and think about the higher-level dynamics that are going on. There's a lot of discussion out there around, you know, AI demand and traditional compute demand, and you guys have probably been asked this every which way. But I'm curious, you know, Dave, when you look at the data center business, how are you thinking about where we're at as far as kind of inventory correction or digestion in the server CPU market? Maybe what inning we're in right now, and how you think about the progression of the demand, you know, if that were to kinda play itself out over the next couple of quarters?

I mean, if anything you can share there on how you're seeing customers engage right now and just what you're seeing from a server CPU perspective.

David Zinsner
EVP and CFO, Intel

Yeah. So that's a good question. And I did mention, you know, we had a relatively tepid view of Q3 when we provided guidance. You know, Pat's comments that we'd be, you know, above the midpoint are in some ways also a client comment, but, you know, were in some ways informed by how we were doing on the data center business as we progressed through the quarter. You know, it seems to be going better, and I think, quite honestly, part of that is, and Pat talked about it in the keynote, how good Sapphire Rapids is as a product for AI workloads. I think that's helped support that business better than we expected.

My view of data center is, I don't know whether I could quote an inning, but we've probably got two quarters to go, whatever that is in innings, you can translate, before we're in a good place for inventory. I do think we're set up to see a decent year next year. I mean, that will, you know, kind of work its way through over the back half of the year, and that gets it cleaned out in the way that client was cleaned out probably somewhere in the second quarter, and so, you know, there's a natural kind of lift you get as inventory is digested. I think that will help the data center business.

You know, Sapphire Rapids, followed by Emerald Rapids, followed by Sierra Forest, and, you know, we may see some revenue on Granite next year. I mean, we just got a whole wave of products that I think are just increasingly better and, you know, meet customer demands in a way that I think will be very helpful and creates a tailwind effect in terms of demand for our products. That also is gonna help. And then, as we talked about, as we build up this pipeline of Gaudi through the year, most of that pipeline, you know, if it turns to revenue, will turn to revenue in 2024, and that will also be a nice tailwind to the business.

So we're set up to see... you know, we'll have this kind of volatility, or what have you, air pocket in the business that we're seeing right now. I think that goes away by the time we exit 2023, and 2024 should be a really good year for the data center business.

John Pitzer
Corporate Vice President and Investor Relations, Intel

Aaron, do you have a quick follow-up?

Aaron Rakers
Managing Director and Senior Equity Analyst, Wells Fargo

Yeah. Yeah, I guess I do. Thanks, John. You know, real quickly on the gross margin variable, you know, as we look forward, one of the things that I've always been a little bit confused by is that, you know, you started this year with the benefit of the elongation of the depreciation cycle on the equipment.

David Zinsner
EVP and CFO, Intel

Yeah.

Aaron Rakers
Managing Director and Senior Equity Analyst, Wells Fargo

I guess if I'm reading through some of the material, it seems like that benefit actually increases here over the next couple of quarters. Can you just help us appreciate the positive gross margin variable related to that accounting element? That'd be helpful.

David Zinsner
EVP and CFO, Intel

Yeah, it's obviously been helpful. And, you know, we get the benefit from a depreciation perspective immediately, starting on day one. But because this stuff has to flow through inventory and back out, the benefits, as they relate to gross margins, you know, take some time to show up on the income statement. And I don't know, did we quote a number on that?

John Pitzer
Corporate Vice President and Investor Relations, Intel

No. What I was gonna say, Aaron, is when we gave our guidance for Q3 and we went through the sequential walk, we actually did not talk about the depreciable life change, in large part because it's not a meaningful part of the sequential walk from Q2 to Q3. It's really more about less pre-PRQ charges and less underutilization charges. So just keep that in mind. Now, we still have depreciable-life benefit in inventory that will run through the P&L, so I don't want you to say it's not a benefit. It is absolutely a benefit. It just wasn't all that meaningful in the sequential walk.

Aaron Rakers
Managing Director and Senior Equity Analyst, Wells Fargo

Yeah, that's very helpful. Thank you, guys.

John Pitzer
Corporate Vice President and Investor Relations, Intel

Appreciate it. With that, we've ended our session. I appreciate everyone's attendance in the room, and thank you to everyone online as well. We'll see you in about 4 or 5 weeks on earnings.

David Zinsner
EVP and CFO, Intel

Sounds good. Thanks.
