Intel Corporation (INTC)

Vision 2025 Event Day 2

Apr 1, 2025

Christoph Schell
EVP and Chief Commercial Officer, Intel

Good morning, everyone. To all of you joining online, it's great that you're with us. For all of you in the room, it's even better that you're joining us. I had some people suggest to me at the bar last night that it might be an April Fools' joke that we're starting at 9:00 A.M. No, it wasn't, OK? It was actually serious. We'll probably talk about the 1st of April date and this event in a couple more meetings today. I'm super delighted to have you here at Intel's Vision 2025 event. There's so much for us to talk about, and I'm super happy with the lineup that we have today and the subsequent sessions throughout the day. Before I start, let me express a huge thank-you to all of you in the room.

You represent the Intel partnership, the Intel ecosystem that is so important for our success. Having you with us this week is super important for us. It informs how we design our roadmap. It informs how we will evolve from a go-to-market point of view. We are doing all of this, and I really want to make this clear, with your success in mind. Please, as Lip-Bu said yesterday, be brutally honest with us. Give us feedback. Tell us where we can do better. We have hundreds of enterprises in the room and online today, and that's how we want to engage. We want to be specific to your company, specific to your needs, specific also to your competitive pressures. There's an opportunity for us to accomplish this together and to really talk about what we did in the past year.

Now, we'll touch a little bit on this, but then really look forward and see how we can set the stage for a successful 2025. I will continue to use the word "us" a lot, and I told my team to do that as well. This is important for us to really have two-way conversations and explore opportunities together. Now, since Vision 2024, only a year ago, a lot has changed in our industry. A lot has changed in the world. Many of the conversations I had with you yesterday were not about technology. They were about tariffs and how to be ready for tariffs, how to think about supply chain differently. That is actually at the heart of the semiconductor industry. We learned all of this during COVID, OK, when our products were scarce and a lot of you were on allocation.

Now we're looking at it from a tariff point of view, and there are a lot of ideas in the room. Let's talk about that. Besides just the technology piece, let's talk about supply chain optimization. Let's talk about how to make products available. There's also been a lot of news about Intel. And I can tell you, as the executive hosting the communications team at Intel, I have had a lot of journalists call me since last Vision 2024 asking for comments, OK? I'm so happy that we have Lip-Bu with us and that Lip-Bu had an opportunity yesterday to be on stage and introduce himself. I got a lot of comments last night on his resume, but I also got comments on his management style from you, and also from our own team. He's introducing himself to our team as well.

It was really cool to have him on board yesterday. Now, aside from all of these changes in the industry and within Intel, I think it's important that we lean back a little and reflect on what we actually changed when it comes to engagement with you. I know that some of these changes, while really beneficial from a long-term point of view, bring pain in the near-term execution. Let me touch on some of this and give you a renewed sense of the purpose that we have for 2025. What did we change? Number one, we made a lot of portfolio changes. That is great because it helps us focus from an engineering point of view. It also means that some of you had to change your plans, that you couldn't plan anymore with products that we are discontinuing.

I'm aware of the pain that is causing. We will continue to be super transparent with you. You heard Lip-Bu talk about this yesterday. We are not done. He will continue to look at the roadmap. He will continue to draw distinctions between what he thinks is core and what is non-core. We will be super transparent with you. The second thing we did last year: we put a renewed emphasis on the x86 architecture. We actually opened up about how we think about x86, even making x86 available for some custom and semi-custom designs, something that is actually working out for us. I like the feedback that I'm getting from customers on our ability to design custom x86 SoCs within the chiplet world. Third, we are becoming more agile and more flexible. We feel that internally.

For some of you, we are still the 100,000-employee giant, OK? Yes, being agile in a company as large as Intel is sometimes difficult, OK? It takes time. We have absolutely focused our resources. We have changed processes quite a bit. Now, on the go-to-market side, I've been busy over the last year moving more resources into my markets, moving more resources closer to the customers. We've made decisions on pricing. A lot of the rebate programs that we had are discontinued, and we moved to upfront pricing. The idea here is to make it simpler for you to understand what your net pricing is with us and also to take out a lot of back-end resources in how you engage with us. For us, the hit is on cash flow, OK?

Because we now give you that pricing right when you place the order, when the product ships and the invoice goes out. I hope you recognize some of these as really important measures and activities that we took to be stronger and to help you drive the outcomes that you require in your markets. Now, on the product side, I'm super happy with the progress that we made on launching new processor families. On the foundry side, we are really in the thick of making 18A a really good value proposition for fabless customers. We launched an x86 ecosystem advisory board, very helpful for the semi-custom and custom designs I was talking about.

There is a lot of collaboration with you on those designs, not just from a foundry point of view, but also from an Intel product point of view, with a lot of good exchange of intellectual property with some of you. Yesterday, you heard Lip-Bu say that our work is not over, that we have to continue to do a lot of hard work. I think he was very clear about that. The transformation that we are in is going to continue. I really feel that last year, we set ourselves up to be on the right path. It is our number one job to make you successful. As you go through this keynote this morning, please keep that in mind and think about what you are hearing. What does it mean for you? Where do you want us to shape things differently?

Give us that feedback throughout today and tomorrow. Now, to dive in, I have a whole team here that will help substantiate what I just said. We're going to start with products. I think there's no better person in the company to talk about the Intel product strategy and the outcomes our products have for our customers than our CEO of Intel Products, Michelle Johnston Holthaus. Michelle, please come on stage.

Michelle Holthaus
CEO, Intel Products

Thank you, Christoph. Good morning, everybody. Vision is always one of my favorite events of the year. It's such a great time to have conversations with so many of you and an opportunity to really listen to what you want, what you need, and, most importantly, what you desire to create, hopefully with the technologies that Intel will bring to market. It is also exciting to see the impact that Intel technology is making. What I love the most is that we're partnering with our customers to drive innovation in some very, very unexpected places, a few of which I'm going to highlight today. Let's dive right in. Everything we do starts and ends with you, our customers. We're laser-focused on understanding your biggest challenges, your needs, and, most importantly, your opportunities.

Innovating our designs to ensure that we're delivering world-class products and solutions that help you accelerate your businesses. I talk a lot with customers, and I continue to be amazed at the incredible ideas and innovations that our customers come up with. Intel is powering innovative solutions across a variety of industries, from trailblazing startups to transporting people to new places to massively scaling out infrastructure across the globe. We're enabling those experiences and transforming businesses on the inside, but also pioneering experiences for your customers.

Ultimately, people and organizations around the globe are utilizing Intel-powered technology to do incredible things. That's what matters to all of us at Intel at the end of the day. I don't want you to just take my word for it. I'm going to highlight a few this morning. Earlier this week, I had an incredible opportunity to ride in a purpose-built autonomous vehicle created by Zoox. I sat down with Aicha Evans, the CEO of Zoox, to talk more about their technology and their partnership with Intel. Let's roll the video.

Aicha Evans
CEO, Zoox

Michelle, have you ever been in a robotaxi?

Michelle Holthaus
CEO, Intel Products

I've never been in a robotaxi. This is my first ride ever, and I couldn't be more excited.

Aicha Evans
CEO, Zoox

You ready?

Michelle Holthaus
CEO, Intel Products

I'm ready.

Aicha Evans
CEO, Zoox

Let's go. You talk to anybody at Zoox, we're very clear on what we're trying to do and why we're trying to do it. I tell them, hey, here, it's not about you getting an A. It's about making sure no function gets an F.

Michelle Holthaus
CEO, Intel Products

Oh, I love that.

Aicha Evans
CEO, Zoox

If everybody gets an A, but one single function gets an F, we all failed.

Michelle Holthaus
CEO, Intel Products

It is fun that this runs on Xeon. You have four Xeons below your feet. If you think about technology partners and the role that they play, right, they can bring your fleet down.

Aicha Evans
CEO, Zoox

We need reliable partners, partners who are going to be around, partners who are transparent with us. We are transparent with them. Everything on the vehicle, all the driving is controlled by the compute on board. If the partner doesn't take the time to understand our ecosystem and what we're doing and why we're doing it, that's not something we can work with. You know, when I look at things like this, we did not design the Xeon with this intention. This is where the ecosystem and customers take amazing technology and figure out new and different ways to deliver it.

Michelle Holthaus
CEO, Intel Products

Inside every Zoox are four Xeons and two networking cards. I rode in that for 50 minutes. Just to give you an idea, we had a jaywalker walk right in front of us, we had somebody run a stop sign, and we had somebody try to cut us off. The car handled it all really, really well. It was really fun to be in this car and see our technology come to life with our customers. I'm excited to share that we have placed 25 VIP passes in the audience. If you look under your chairs, there's a chance to sign up to be one of the first riders for Zoox. I hope there are some winners out there. Oh, I see there's a winner there. It looks like this from behind.

If there's a seat empty next to you, it is absolutely OK to grab it. All right, congratulations to all of you who won. For those of you who didn't win, Zoox will be offering autonomous rides in just a few short weeks here in Las Vegas. They're also in San Francisco and coming to many other locations very soon. Please stay tuned for their public preview. We also have a Zoox vehicle right outside the showcase after we finish here, until 12:30 P.M., if you want to get up close and personal. I promise you, you will not be disappointed. The way Zoox is changing the future of autonomous transportation and ride-hailing is as incredibly exciting as it is inspiring.

Every one of our customers has different needs, different goals for their business, whether you're helping move people around the city like Zoox or you're helping members of a military branch make faster, better-informed decisions from the front lines. With that, I would like to invite Tyler Saltsman, the CEO of EdgeRunner AI, out to talk about how Intel-powered AI PCs are enabling on-device AI agents for both the public and private sector. Please join me in welcoming Tyler. Hi, Tyler. Welcome. Thanks for being here.

Tyler Saltsman
Co-Founder and CEO, EdgeRunner AI

Good to be here.

Michelle Holthaus
CEO, Intel Products

All right, Tyler and I actually met for the first time at Vision last year in Arizona. You had an idea, a dream that you wanted to bring to market that we talked about. Maybe educate the audience a little bit on EdgeRunner.

Tyler Saltsman
Co-Founder and CEO, EdgeRunner AI

That's right. What we're doing is building AI agents that live completely air-gapped on device, independent of the internet. What that means is we can build a better version of ChatGPT that never needs the internet, so your data is safe and secure. More importantly, it becomes hyper-personalized to you. For the military, we're building domain-specific AI for the warfighter. The reason is each warfighter is unique. For example, an F-35 fighter pilot requires a much different AI than a tanker or me as a logistician. I was in the Eastern European Conference in 2017. I'm passionate about this vision. In the military, it's better to make the wrong decision immediately than the right decision too late, which can literally be the difference between life and death.

These agents are the ultimate compression function for knowledge. We can train on all this military doctrine, compress it so it lives right on an AI PC, never needs the internet. Now when things go sideways, the warfighters can make a much better decision immediately. It is not a silver bullet. However, that decision can be the difference between winning the fight and bringing our men and women home.

Michelle Holthaus
CEO, Intel Products

We talk to customers a lot about using AI, and there's a lot of fear around that. I have to imagine that was not an easy conversation to get the military to want to adopt this. Can you tell people a little bit about that conversation?

Tyler Saltsman
Co-Founder and CEO, EdgeRunner AI

Yeah, so we're starting with the Air Force. There's something called NIPRGPT. Think of that as the Air Force's own internal ChatGPT. We've been selected to power it. Now what we're building is, think of it as the Air Force brain. It's a Llama 3.3 70B-parameter model, fine-tuned on a 30-billion-token data set of military data that we've crafted. Now this model broadly thinks military. We distill this model onto devices, onto the AI PC. Then we augment it with LoRAs, low-rank adaptations of a large language model. Think of each one like a Nintendo cartridge. Now that small model only talks to the information pertinent to that adapter.

Now I have that domain-specific intelligence between different missions, different use cases, different equipment, and different personas. That's how we get that personalized experience while keeping the data safe and secure.
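
The mechanism Tyler describes, a frozen base model distilled onto the device plus small, swappable low-rank adapters per mission or persona, can be sketched in a few lines. The NumPy example below is purely illustrative (the dimensions, seeds, and "pilot"/"logistics" names are invented for the sketch, not EdgeRunner's implementation): a frozen weight matrix W is specialized by a low-rank update W + BA, which is why an adapter is tiny compared with the layer it modifies.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 64, 64, 4   # toy sizes; production models use dimensions in the thousands

W = rng.normal(size=(d_out, d_in))  # frozen base weight (the distilled on-device model)

def make_adapter(seed, scale=0.01):
    """Build one swappable LoRA 'cartridge': a pair of low-rank matrices."""
    r = np.random.default_rng(seed)
    A = r.normal(size=(rank, d_in)) * scale   # down-projection
    B = r.normal(size=(d_out, rank)) * scale  # up-projection
    return B, A

def forward(x, adapter=None):
    """Base layer output, optionally specialized by a low-rank update: (W + B @ A) @ x."""
    y = W @ x
    if adapter is not None:
        B, A = adapter
        y = y + B @ (A @ x)
    return y

# Swapping domains means swapping small adapters, never retraining or reshipping W.
pilot_adapter = make_adapter(seed=1)      # hypothetical "fighter pilot" domain
logistics_adapter = make_adapter(seed=2)  # hypothetical "logistician" domain

x = rng.normal(size=d_in)
base = forward(x)
pilot = forward(x, pilot_adapter)

# An adapter stores rank * (d_in + d_out) numbers vs. d_in * d_out for the full weight.
adapter_params = rank * (d_in + d_out)
full_params = d_in * d_out
print(adapter_params, full_params)  # 512 vs 4096 even at toy scale
```

At realistic scale the ratio is far more dramatic: a rank-16 adapter on a 4096x4096 projection is roughly 1/128 the size of the layer it modifies, which is what makes carrying many per-mission "cartridges" on a single AI PC practical.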

Michelle Holthaus
CEO, Intel Products

Talk a little bit about how Intel's helped you over the last year to reach this goal.

Tyler Saltsman
Co-Founder and CEO, EdgeRunner AI

Intel's been amazing. What I really like about the entire Intel ecosystem is that they've been leaning in with our engineering team. They actually work with us. They help us optimize our different agents so they can run inference more efficiently, using Intel's OpenVINO toolkit as well as taking advantage of the latest architecture.

Michelle Holthaus
CEO, Intel Products

Security is really important. Maybe you can talk about security as a function of AI, but also how having Intel as a partner, versus maybe some of our competitors, helps you move faster and deploy more quickly?

Tyler Saltsman
Co-Founder and CEO, EdgeRunner AI

That's right. I mean, Intel owns 80% of the market for a reason. They're the most trusted by the DoD. They're the most stable, the most secure. It's just the best customer experience for us to live on Intel devices. It's been a great partnership. What the military prefers is joint forces operations. When they see EdgeRunner and Intel come together, that's what they appreciate, that better-together story.

Michelle Holthaus
CEO, Intel Products

One of the things that we've all been talking about is we want to learn from our customers. Tell us a little bit about the last year, what we've done well. Most importantly, what would you like to see Intel do more of?

Tyler Saltsman
Co-Founder and CEO, EdgeRunner AI

You know, what I love about Intel is the vision of bringing AI everywhere. I'd say what we need to think about as leaders is making AI culturally aware. One of the biggest learnings was that AI just can't be generalized. For example, the Air Force might as well be a different culture than the Army, which is a different culture from the Marine Corps. Taking this a step further, as we work with the IDF, I could take an Army model and translate it to Hebrew, but it won't be effective because it doesn't think like an Israeli. It doesn't capture that culture. As we build AI, we need context-aware AI. More importantly, we need to capture that culture, because language is nuanced. As we build AI that uses natural language processing, like this conversation we're having today, we need to make sure that we're capturing that.

Michelle Holthaus
CEO, Intel Products

Awesome. Anything you would tell us to do better for you?

Tyler Saltsman
Co-Founder and CEO, EdgeRunner AI

I'd just say keep working with us. The fact that MJ met with me when I just had an idea, and she's helped me bring it to life. Just keep being who you are.

Michelle Holthaus
CEO, Intel Products

OK. I love that in just the last year, we've gone from a concept, an idea, a conversation to an application that people can use and, more importantly, that has been deployed. I think that partnership between Intel and EdgeRunner is really what made this a reality. If you're interested in seeing this, it's on the showcase floor, so you can come see it up close and personal. Thank you so much, Tyler.

Tyler Saltsman
Co-Founder and CEO, EdgeRunner AI

Thank you.

Michelle Holthaus
CEO, Intel Products

EdgeRunner is just one example of a customer using AI technology to innovate and disrupt. Another thing that inspires me is when customers bring together multiple Intel technologies to provide their customers with a full platform experience. That's when it becomes really, really powerful, that full solution. Take, for example, Softtek. They're a digital solutions company based in Mexico. I'd like to welcome Blanca Treviño, who's the President, Co-Founder, and CEO of Softtek, out to talk a bit more about the work that they're doing. Join me in welcoming Blanca to Vision.

Blanca Treviño
President, Co-Founder, and CEO, Softtek

Good morning. Thank you for being here. Thanks for having me.

Michelle Holthaus
CEO, Intel Products

All right, Blanca, can you tell the audience a bit about Softtek?

Blanca Treviño
President, Co-Founder, and CEO, Softtek

I sure can. Let's make sure that we make this as exciting as the previous session. The military, it's tough to follow. It is tough. Let's make this happen. Softtek is a software engineering partner that drives organizations forward through technology to improve lives. We build, implement, and run technology for banks, airlines, manufacturers, retailers, professional sports organizations like Real Madrid, and many other businesses and government entities. We do this across North America, Latin America, Europe, and Asia. To achieve this, we have around 16,000 professionals. We also have Frida. Frida is not just a Mexican painter; it's also our artificial intelligence platform. We have been developing Frida for the past 10 years, and in 2024, we began leveraging Intel's technology to enhance its capabilities. It has been amazing.

Michelle Holthaus
CEO, Intel Products

Wonderful. We have a unique relationship, a 360-degree relationship between Softtek and Intel. Can you talk a bit about that?

Blanca Treviño
President, Co-Founder, and CEO, Softtek

We do have that kind of relationship. Yes, Intel is our client, and we are Intel's client. Probably more exciting is that we have this partnership in which, together, we bring solutions that accelerate AI transformation across Latin America and the U.S. That is exciting. We use Intel's advanced technologies to drive our clients' businesses forward. Our promise to clients is that we are not here to reinvent or reimagine, but really to drive results. Our clients are some of the most innovative and high-performing businesses in the world. Some of them are here. They have a vision that needs to come to fruition. We enrich that vision and make it possible. With Intel technology, we can deliver on that promise, whether it's to improve performance, optimize costs, or create future-ready solutions.

Michelle Holthaus
CEO, Intel Products

Yeah, that's interesting. Intel's technology has enabled Softtek to deliver a variety of solutions, including Frida. Can you share a bit more about Frida and the role it plays in delivering the solutions for your customers?

Blanca Treviño
President, Co-Founder, and CEO, Softtek

Absolutely. We use Frida in different ways. Probably the first one is to enhance the talent and efficiency of our people across the application development lifecycle, from ideation and requirement definition to implementation, support, and modernization. That is where it gets exciting. We have been leveraging specialized architecture from Intel through Gaudi AI accelerators. It is at the heart of Frida's generative application engine. This helps us significantly accelerate almost every aspect of the software development lifecycle. Some of the results are really, really amazing: up to 40% faster development cycles, better code quality, and a massive, massive developer productivity boost. We are talking about 35% improvements. Our clients are always impressed by that number, but it is true. With Intel technology, we can deliver a simpler, smarter, and more reliable approach that boosts results for clients.

Michelle Holthaus
CEO, Intel Products

That's impressive. Many people know I love a strong say-do ratio and driving results. Can you give us some specific examples about how you're using Frida?

Blanca Treviño
President, Co-Founder, and CEO, Softtek

Yeah, I talked about our people. Probably another example would be how we use Frida to enhance the efficiency of our clients' business functions and tasks, such as automating processes, delivering real-time analytics, or deploying virtual assistants and agents that support specialists in their roles. Let me give you a pretty amazing example. We are using Intel's AI PC technology as a medical assistant. One of our clients in Latin America, a major medical service provider, operates clinics in rural communities, some of them in very remote areas with limited internet access.

By running a small language model directly on PCs, Frida assists doctors by providing very valuable information and supporting the patient's diagnosis. There are many more examples than we have time to share with you. I truly believe this is what keeps us really excited about technology: what we can create working together with our partners, particularly with Intel, and all the technologies that are enabling, again, a simple, smart, and reliable approach to the AI transformation that every company is now involved in.

Michelle Holthaus
CEO, Intel Products

Yeah, I'm definitely excited about what the future holds for both Softtek and Intel. I look forward to trying out Frida myself.

Blanca Treviño
President, Co-Founder, and CEO, Softtek

Absolutely.

Michelle Holthaus
CEO, Intel Products

Thank you so much, Blanca, for joining us.

Blanca Treviño
President, Co-Founder, and CEO, Softtek

Thank you. Thank you. It was a pleasure.

Michelle Holthaus
CEO, Intel Products

I get inspired every time I hear about new and different AI use cases because they're popping up nearly every day. Hopefully, you heard here from three unique customers who are all leveraging Intel solutions to innovate and create business opportunities across a vast set of industries. In fact, our products only come to life with the vision and dreams of our customers. That is why, as a team, our number one job is to listen to you, our customers, and then engineer incredible solutions that fit your needs, quite literally enabling you to change the world. With that in mind, the Intel product group is focused on three key priorities to drive your success. First and foremost, winning in AI PCs and enabling you to capture the AI client opportunity from the edge to automotive to the PC and to the workstation.

Second, strengthening our data center capabilities across traditional data centers and workloads in order to help you maximize your existing investments, such as refreshing to recapture space and reducing power, all while reducing your total cost of ownership. Third, we've got to continue to innovate in AI, enabling the next generation of software and hardware while helping you future-proof critical infrastructure to leverage the power of AI through full-stack solutions. The commonality in all of these priorities is how we are designing and building technologies, not just to meet your needs of today, but to enable you to innovate further and, more importantly, to create the future. Next, you'll hear from each business lead on my team how we're achieving these priorities. Let's kick it off with CCG. Please welcome Jim Johnson.

Jim Johnson
EVP and General Manager of Intel’s Client Computing Group, Intel

Backup mic. Check, check, check. OK, welcome again, everyone. Thanks, the mic was off. Everything we do in client is to deliver on customers' expectations so we can be successful together. The importance of our partnerships cannot be overstated, as you just saw. Together, we've successfully brought the AI PC to life, shipping tens of millions of units and enabling AI features on device. It's a game changer for cost, for privacy, and for security, and it lets us reimagine what we can do with our PCs. Our client strategy, as Michelle mentioned, extends well beyond the PC, including edge IoT devices and software-defined electric vehicles. There are common requirements across these three segments, including compute, graphics, and, of course, AI.

As we've stated many times, to build a great AI PC, you must first build a great PC: leadership performance in all workloads, battery life, and, most importantly, the strength of x86 software compatibility. Together, we build incredible devices. Our client portfolio this year is broader than any of our competitors', featuring the breakthrough battery life of Lunar Lake coupled with the performance of Arrow Lake in mobile and desktop form factors. At CES, we just launched our AI PC architecture into the commercial segment, extending our leadership with the most significant update to vPro technology in two decades. There are thousands of Lunar Lake commercial vPro devices being deployed at large enterprise customers as we speak. You can see behind me the type of feedback we're getting. I'll say it frankly: it's even exceeding our expectations.

We know how important security and manageability are for your business operations and your customers' business operations. We recently announced new vPro services, our first cloud-native, off-prem solution for vPro. If you see the demo next door, you'll be surprised at how easy it is to deploy with your customers. It's the biggest update in 20 years. In addition to vPro software, our AI PC software continues to expand with the broadest ISV ecosystem in the PC industry, including hundreds of developers. We have brought a new class of AI applications and features to run on our PCs. The question now is, how do you find these applications? Yesterday, we launched a consumer AI PC app showcase, making it simple for anyone to discover what apps are enabled on Intel AI PCs. This is more than a list.

We have categorized them so you can choose AI applications that enhance your PC for the way you want to use it. We'll have a commercial version of this showcase available to enterprises in just a few weeks. Now, to share some examples of the AI PC in the enterprise, please join me in welcoming Dunya from Deloitte.

Dunya
Deloitte

Thank you so much for having me. We use Intel, and we're proud to work together in the marketplace to do the same for our clients.

Jim Johnson
EVP and General Manager of Intel’s Client Computing Group, Intel

You know, we're so glad to have you. Thank you for building this partnership. In building and testing these new use cases for the AI PC, can you share a little bit about your motivation?

Dunya
Deloitte

I can. A few years ago, like many organizations, we knew that AI had the potential to transform operations and to generate value. We really wanted to tap into that power and help our professionals make more informed decisions, be more efficient as they deliver that value. One program that we're running right now looks at the power of the neural processor on Intel AI PCs, everything from on-device coding assistance to help our developers to IT help desk tier one support applications that run locally. We are also working with you on our multi-agent architecture, which is really going to help us accelerate daily workflow.

Jim Johnson
EVP and General Manager of Intel’s Client Computing Group, Intel

You know, it's exciting to me how you're embracing the new environments. You mentioned on-device, which is important to us, coding assistance, IT tier one help desk. How have these AI PCs helped you in your business and your customers' business?

Dunya
Deloitte

First, it started with an idea. How could we empower our developers, make their day-to-day tasks easier, things like testing and code completion? There is really so much promise. AI inferencing without cloud or internet connectivity, proprietary data protection, and the portability of a smaller language model on the device. This also translates into a better experience for everyone. We recently piloted our tier one help desk application called TechSage. It uses Intel AI-powered PCs. It is designed to perform simple tasks ranging from password resets to more complex tasks like Outlook or Teams troubleshooting. Applications like these are really helping our professionals focus on more high-impact work. We are actually demoing it later this afternoon if any of you would like to join.

Jim Johnson
EVP and General Manager of Intel’s Client Computing Group, Intel

That's amazing. What's impressed me most is you've taken and made the complex simple. You can now take on the next set of complex tasks. Looking forward, what's next?

Dunya
Deloitte

We see AI PCs as really a cornerstone of our workforce transformation journey, just like the multi-agent architecture that I mentioned. The need for more advanced computing at the edge is going to keep growing and unlock greater potential for improving workflows. We see AI PCs as a cornerstone of that. We're going to continue investing for innovation, for efficiency, and to deliver value.

Jim Johnson
EVP and General Manager of Intel’s Client Computing Group, Intel

Thank you for coming. I look forward to working with you another year. Thank you.

Dunya
Deloitte

Thank you, Jim. Take care. Thank you.

Jim Johnson
EVP and General Manager of Intel’s Client Computing Group, Intel

Thanks, Dunya. Let's step beyond the PC. We've been purposely defining our client silicon roadmap to meet the requirements of the edge. Edge is the business of tailoring solutions to specific industry use cases. AI at the edge is being used in many, many ways, and each use needs a unique solution of hardware and software. One example is in health care. Radiation therapy planning is very time-consuming and labor-intensive. Using our edge technology, Siemens developed an AI-powered cancer treatment solution that accelerates the process with greater precision than in the cloud, a 35x speedup in inference performance on device, which reduces planning time and means quicker treatment planning. The bottom line is clinicians can deliver timely, personalized care to patients much faster. If I step back and look at the edge, all of our customers have unique needs like this, but they all care about a few things.

They need the right value in inference performance, so they can deliver better performance per dollar than current solutions. To help the ecosystem go faster as it helps enterprises accelerate AI deployments at the edge, we just announced three AI-enabled platforms. With these offerings, we're able to unleash the power of an open ecosystem to accelerate AI at the edge. Now, finally, let's talk about automotive. I just returned from a trip to China, and I saw firsthand how the market has exploded and is still expanding. The software community is highly energized, and they're moving at lightning speed. Automakers are delivering in-vehicle architectures and workloads that have become very PC-like. They use the same audio and display technology, but with multiple audio streams and many more displays. They even have AI models like DeepSeek running in the car. Similar demands to the PC, but at a much larger scale.

To address these markets, we're going to share more of our upcoming roadmap and technologies at the Shanghai Auto Show in April. Let's just step back. We've delivered the roadmap we committed to you two years ago. Next up is Panther Lake. I'm personally excited about Panther Lake because it combines the power efficiency of Lunar Lake with the performance of Arrow Lake, is built on Intel 18A, and is on track for production later this year. Our client roadmap is the most innovative we've ever had, and we are far from done. We're continuing to invest and build revolutionary technology that, one, reshapes consumer experiences, two, meets enterprise needs, and, three, ignites industry transformations. Now, I'd like to welcome Ann Kelleher on stage to speak about how we're addressing your needs in the data center. Thank you very much.

Ann Kelleher
EVP and General Manager of Technology Development, Intel

Thank you, Jim. There are so many exciting things happening within our client segment. It truly is an innovative portfolio. Thank you for joining us today. Let's jump into the data center. Intel's full-stack AI strategy combines x86 CPUs and accelerators, supporting diverse workloads with software compatibility and hardware reliability. It's a three-pillar approach: Xeon for general-compute AI, Gaudi for AI acceleration, and custom x86 chips for specialized needs. AI spending is projected to reach $153 billion on GenAI and $361 billion on machine learning and analytics by 2027. In fact, we see customers leveraging Xeon both as the host CPU in an accelerator-powered AI system and as the best CPU for AI computation. More on this with our customer later. Data center demands are changing. Lower power, reduced cost, and a smaller infrastructure footprint are critical without sacrificing performance.

We are committed to returning to our customer-first co-engineering ethos to meet those challenges. We are committed to being better listeners to help address your biggest concerns. We acknowledge past gaps, including in performance, and are actively addressing them to regain leadership. Ultimately, we want to support you, ensuring that Xeon remains the best CPU in the market for existing and emerging workloads. AI is driving innovation, but traditional enterprise workloads still dominate power consumption, accounting for 55% of data center energy demand by 2028. Businesses need efficient, scalable infrastructure that supports both AI and conventional computing. Balancing modernization and return on investment is key.

Our Xeon 6 portfolio is a step in the right direction, delivering industry-leading performance for the broadest set of workloads, including up to 40% higher performance than the previous generation, up to 50% lower TCO while maintaining superior performance, and up to 50% better AI inference with one-third fewer cores than the competition. For AI hosting, Xeon remains the CPU of choice for GPU-powered AI systems. It is also the only x86 solution designed to meet market demands with enhanced processing power, I/O bandwidth, and memory capacity. When we talk about modernizing infrastructure, the focus is on consolidation and performance. Xeon 6 enables up to a 10:1 server consolidation opportunity, reducing TCO by up to 68%. Oracle Cloud Infrastructure will double the number of cores per rack with Xeon 6 with E-cores, maximizing efficiency and capacity for their customers. Modernization is not just an upgrade.

It's a strategy. Great companies don't just build strong products. They bring world-class engineering to solve real problems. It's through some of our listening and co-engineering that we have developed a new memory technology, MRDIMMs, designed to improve the performance and efficiency of servers and data centers, boosting speeds to 8,800 megatransfers per second (MT/s) on Xeon 6, hand in hand with advanced cooling technologies for better energy efficiency. We are delivering on our server consolidation efforts to directly support your infrastructure modernization. This will remain a priority. Looking ahead, our goal is for our 2026 P-core and E-core Xeon lineup to deliver competitive performance per watt and undisputed leadership. We want to be your trusted data center partner, helping you navigate today's challenges while anticipating the future. The most exciting thing about our data center products is seeing them in action.
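Two bits of back-of-envelope arithmetic sit behind numbers like these. The sketch below is illustrative only: the 8,800 MT/s speed and 10:1 ratio come from the remarks above, while the fleet size and per-server wattages are invented placeholders.

```python
# Back-of-envelope arithmetic only. The 8,800 MT/s MRDIMM speed and
# 10:1 consolidation ratio are quoted above; everything else here is
# an invented placeholder, not an Intel figure.

# 1) Peak bandwidth of one 64-bit (8-byte) memory channel at 8,800 MT/s.
mt_per_s = 8800
bytes_per_transfer = 8
gb_per_s = mt_per_s * bytes_per_transfer / 1000
print(f"per-channel peak: {gb_per_s:.1f} GB/s")  # 70.4 GB/s

# 2) What 10:1 server consolidation does to a hypothetical 100-server fleet.
old_servers, ratio = 100, 10
watts_old, watts_new = 500, 900   # hypothetical per-server power draw
new_servers = old_servers // ratio
power_saved_kw = (old_servers * watts_old - new_servers * watts_new) / 1000
print(f"{new_servers} servers, {power_saved_kw:.0f} kW saved")
```

Fewer, denser servers draw more power each, but the fleet total still drops sharply, which is where much of the TCO claim comes from.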

I'd like to welcome to the stage Luke Norris, CEO of Kamiwaza AI, and Sunny Wescott, a Federal Chief Meteorologist, who are using Intel technology to transform emergency readiness. Hi, Sunny and Luke. Wonderful to have you with us here. I see you brought a system with you. Before we dive in, can you explain what you both do in your own words?

Sunny Wescott
Chief Meteorologist, U.S. Government

Thank you. I'm a Federal Chief Meteorologist specializing in analyzing weather patterns that impact critical infrastructure and emergency preparedness across the nation.

Luke Norris
Co-Founder and CEO, Kamiwaza AI

Thanks for having us. I'm the CEO of Kamiwaza. Our AI orchestration engine helps customers automate the process of unlocking insights from their data using the latest AI models and techniques, eliminating barriers to AI adoption like vendor lock-in and data security concerns.

Ann Kelleher
EVP and General Manager of Technology Development, Intel

With weather patterns changing, how important is it to predict the impact of changing or more severe events?

Sunny Wescott
Chief Meteorologist, U.S. Government

It is incredibly important to not only predict the events, but to ensure emergency managers understand them. My work involves examining massive amounts of historical weather data to forecast threats to public safety and operations, translating that complex meteorological data into actionable intelligence for emergency responders to save lives and protect communities. The research on impacts to operations from changing barometric pressure systems left me struggling with the need to process 90 years of ASOS sensor data across the country. Most of that data was in GEMPAK, an outdated and complex format that made extraction nearly impossible with traditional tools. The research is actually fueling a thesis paper being facilitated by the Naval Postgraduate School's Center for Defense and Homeland Security. This work aims to give our emergency response teams a faster way to understand the impacts from weather anomalies like hurricanes, floods, or extreme weather. Manual correlation of this information typically took days, and non-technical users couldn't access the insights. We needed an AI-driven approach to turn legacy data into actionable intelligence in minutes, not days.

Ann Kelleher
EVP and General Manager of Technology Development, Intel

I can see how that would have been a big problem to solve for. Luke, can you help and tell us how Kamiwaza is powered by Intel using Xeon and Gaudi?

Luke Norris
Co-Founder and CEO, Kamiwaza AI

Definitely. Let me show you.

Ann Kelleher
EVP and General Manager of Technology Development, Intel

Great.

Luke Norris
Co-Founder and CEO, Kamiwaza AI

What you can see here is our software running on our Gaudi test server. It's got eight Gaudi cards and a terabyte of VRAM. You can see here in the cluster catalog that we have no data loaded yet. With that, I'm going to kick off this prompt. It's going to tell it to go to our favorite weather site and download a day's worth of data. We deployed our platform in collaboration with our partners from government acquisitions on Intel Gaudi 3 and Xeon 6. Performance and energy efficiency were paramount. Our AI agents autonomously process historical GEMPAK weather data utilizing MCP, the Model Context Protocol, and browser use. The agent converts it into the modern Parquet format and loads the data into the system for retrieval-augmented generation capabilities.

What you're actually seeing here is the Gaudi system with a large visual model reaching into this laptop, kicking off that browser autonomously, going to the web page, inferring from the web page, and now actually downloading data. It's just finishing up the second file there. Now that we've downloaded all the data, I'll simply upload it into the catalog. Let me see. And just now, this was one day of processing. Emma White, the data scientist who did this in production, had to wrangle 33,000 files dating back to 1933, cleansing the data, removing null values and other incomplete data, and making everything accessible via a natural language interface. This resulted in 1.3 billion rows of data, literally a trillion data points. Even a year ago, this would have taken a large team months and months of work to accomplish. As you can see, we're fully in the data catalog, and I will just pop over and refresh the screen. Ah, regardless. Live demos.

Ann Kelleher
EVP and General Manager of Technology Development, Intel

Wow. That's the risk with live demos, but that's such a better way to interact with the data. Thank you for sharing. Sunny, can you talk about the benefits you have seen from using the Kamiwaza technology?

Sunny Wescott
Chief Meteorologist, U.S. Government

Yeah, absolutely. Initially, given our focus on the environment, efficiency was extremely important to us in selecting an AI solution. When Luke and Kamiwaza showed us Intel's capabilities, we knew we could get the energy and water efficiency that we needed. Using this system, we were able to analyze historic weather patterns and predict their impacts on critical infrastructure and emergency response capabilities and share that with our field team.

Luke Norris
Co-Founder and CEO, Kamiwaza AI

As you can see here, this is the actual app post-processing. We selected Columbus, Ohio, and we have a weather system coming in at 940 millibars. We can simply generate the report and hand it to Sunny.

Sunny Wescott
Chief Meteorologist, U.S. Government

Absolutely. This takes the data one step further. This retrieves news archives to provide real-world context for federal, state, and local responders to engage ahead of the impact. This frees those analysts to focus on emergency response, not on data wrangling. As you can see on the screen, the user can immediately interact with data from previous states when a storm of similar intensity had impacted the area, what damages were reported, what actions could be taken to mitigate them, and some considerations for engagement with sector owners and operators. We are now expanding this service to other agencies, showing how Intel hardware enables AI to turn historical data into life-saving insights.

Ann Kelleher
EVP and General Manager of Technology Development, Intel

Thank you. Thank you so much, Sunny and Luke. Great to have you here and to see how Intel Xeon and Gaudi is helping to solve real-world challenges.

Luke Norris
Co-Founder and CEO, Kamiwaza AI

Thank you.

Ann Kelleher
EVP and General Manager of Technology Development, Intel

To wrap up, we will continue to listen to your needs and work together to solve real challenges and problems, like the example we just saw. We want to exceed your expectations. Now, I want to welcome Sachin Katti, who will address the broader AI data center and networking portfolio.

Sachin Katti
SVP, Chief Technology & AI Officer, and General Manager, Intel

It's great to be here. An absolute pleasure. Today is April Fool's Day, and we've been talking a lot about AI. I woke up this morning and wanted AI's help in writing a joke that I could use today. I fed ChatGPT Lip-Bu Tan's transcript from his keynote yesterday and told it: give me a joke about AI and Intel and make it relevant based on Lip-Bu Tan's transcript. Here's what it came back with: AI wrote this keynote, but then it talked about Moore's Law and Xeon and AI PC 47 times and never once asked what the customers wanted. Like Lip-Bu Tan said, not only do we have to reprogram the humans in Intel, we also have to reprogram the Intel AIs to do two things. Step one, listen.

Step two, don't ship any products without talking to you all first. Like every good joke, there's a grain of truth to it, right? This is what we need to do. We need to learn how to listen, how to co-innovate with you, how to anticipate your problems, and help you navigate the challenges over the next decade. This is what it boils down to. How do we rebuild that culture? How do we rebuild that engineering excellence to drive innovation with all of you? Let's talk about how we are thinking about doing that for AI. That's obviously the major disruption among us. We need to take a step back and think about how we are going to approach this transformation that we are all going through. We're still in the early days of AI.

The journey ahead is as exciting as it is challenging. Lots of excitement every single day. New models coming out, agentic AI coming out, and increasingly the expectation that AI will come into our physical world with robots and humanoids. If you take a step back, what you see is that it is still the domain of a select few. It's only accessible to those with access to high-end infrastructure, lots of power, nuclear reactors even, and skilled talent to manage this infrastructure. It's not really become accessible to the whole world. This is not unexpected. Every big transformation that we go through has that same characteristic. It starts with very complex systems at the start, and then we figure out technology to make it accessible to everyone. Think about electricity itself.

It started as very complex power stations, with electricity accessible only to a few select institutions near the power station. Then we invented the power grid. That really made electricity and energy accessible to everyone. We went through this for the internet and search itself. Some of you remember the search engines before Google. They used to be served out of vertically integrated mainframes that could not scale to handle the demands the internet was placing on them. Google came along and built a distributed cloud infrastructure using off-the-shelf PCs that they bought from Fry's in the Bay Area. Those were actually Intel PC chips to start with. Later on, they used Intel Xeon processors to build a distributed scale-out infrastructure that could really deliver search to the whole planet. This is how innovation becomes accessible to everyone, right?

We start with these big, complex mainframe-like systems that are the domain of a select few. Then we invent technology that really makes this accessible to everyone. We do not think AI is going to be any different. AI has to go through the same transformation that we have been through in these kinds of disruptions in the past. We have to build distributed scale-out architectures for AI that make it possible to deploy AI everywhere and make intelligence accessible to everyone. That is going to be our mission. We need to talk to you, listen to you on how do we work with all of you to make that happen. Several conversations yesterday all about how we need that to happen for the whole ecosystem, for the whole world, and how Intel needs to be stepping up its game in making this happen. That is our promise.

That's what we'll be focused on over the next couple of years. You'll hear more about it in the next few months. I wanted to talk today about our current portfolio and how it's already taking a baby step in that direction. I want to start with Gaudi 3 AI accelerators. These are already in the market, and we designed them with a systems-first approach. Lip-Bu Tan was talking about how we want to start changing our culture to software-defined hardware. The workloads that you need to run, how do we work backwards from them to actually figure out what systems, what hardware we need to build? Gaudi 3 is not a general-purpose GPU. It is not meant to address all your workloads. It was really designed to address some of the most important workloads for you, which is around inference and fine-tuning.

How do we make sure that we can deliver that with great performance and cost efficiency? Instead of talking about these qualitatively, we asked a third-party firm to benchmark this on IBM Cloud. IBM Cloud is hosting Intel Gaudi 3 now, and we asked them to compare the performance against the H100 and H200. Compared to NVIDIA H100-based systems, it delivers up to a 2.5x better TCO for many of the common models that we use today in chatbots and AI agents. It delivers similar or better performance than the H100, but at a much more cost-effective price point. You must be wondering: that's the H100, but most of the deployed systems today are on the H200. How do we compare against that?

Even against the H200, on smaller models like IBM's Granite 8-billion-parameter model, Gaudi 3 was 60% more cost-efficient than the NVIDIA H200. Similar performance, much better cost efficiency. Now, what about larger models? These models do not fit on one card. They often have to span a whole system, an entire server of eight cards, for example. On larger models like Llama 3 405B, a 405-billion-parameter model, Gaudi 3 provides up to a 30% cost advantage compared to the H200 on IBM Cloud for the same performance. That is the mantra, right? How do we think about what your workloads are, especially with agents that are going to be combinations of models running inference and fine-tuning, and how do we deliver accessible, cost-effective performance for those applications? That is going to be the theme going forward.
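As a rough illustration of how "same performance at X% lower cost" comparisons like these are computed: the sketch below uses invented throughput and hourly-price placeholders, not the third-party benchmark figures.

```python
# Illustrative arithmetic only: how a "same performance, lower cost"
# comparison is typically computed. Throughputs and hourly prices are
# invented placeholders, not benchmark results.

def perf_per_dollar(tokens_per_s: float, dollars_per_hour: float) -> float:
    """Tokens generated per dollar of server time."""
    return tokens_per_s * 3600 / dollars_per_hour

# Two hypothetical 8-card servers serving the same model at equal
# throughput but different hourly cost.
a = perf_per_dollar(tokens_per_s=1200, dollars_per_hour=70.0)
b = perf_per_dollar(tokens_per_s=1200, dollars_per_hour=100.0)

cost_reduction = 1 - 70.0 / 100.0   # 30% lower cost at equal performance
more_per_dollar = a / b - 1         # equivalently ~43% more tokens per dollar
print(f"{cost_reduction:.0%} cost advantage, {more_per_dollar:.0%} more tokens/$")
```

Note the two framings of the same gap: a 30% cost reduction at equal throughput is the same thing as roughly 43% more tokens per dollar.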

What I want to talk about next is how we are making this translate to business outcomes. It's not just models. Models don't deliver business outcomes. You need applications. You need solutions that actually deliver those business outcomes for all of you. We are collaborating with partners like Inflection AI to create enterprise-grade AI systems powered by Gaudi and delivered through the Intel Tiber AI Cloud. The Inflection AI solution empowers enterprise employees with a virtual AI coworker that is specifically trained on your company's data, your unique set of policies, and your culture. This is going to be critical. We can't have generic AI agents. We need AI agents that work in our enterprise settings and understand how we do work and how we interact with each other. That's what Inflection AI is bringing to the table. I'd love to welcome Ted Shelton, Chief Operating Officer of Inflection AI, to share more about their journey and their partnership with us. Welcome, Ted. I wanted to start by asking you to tell us what Inflection AI is doing and how your solution is bringing secure, cost-effective, customized AI solutions to the enterprise.

Ted Shelton
COO, Inflection AI

Yeah, thank you, Sachin. We've heard a lot this morning about small AI: AI running on your AI laptop, AI on the edge, AI in the car. What we're talking about now is scale AI, an AI that is specifically configured to serve 10,000 or 100,000 employees. Scaling AI, you need to really focus on efficiency. That's why we've worked really closely with Intel's engineers to optimize every single aspect of the way the pipelines and the inference run on Gaudi hardware. In fact, I'm happy to say that we are now running our models at a performance level that is on par with the H100 on Gaudi 2, not just on Gaudi 3. We're starting that work on Gaudi 3 now.

Sachin Katti
SVP, Chief Technology & AI Officer, and General Manager, Intel

Maybe I should have asked you to do the benchmarks. We talked a lot about the growth of AI. Tell us a little bit about how we are co-engineering to actually deliver this kind of cost efficiency and security to the enterprise. What has your experience been working with our engineering teams?

Ted Shelton
COO, Inflection AI

First of all, it's been a fantastic partnership. Thank you so much for that, because we wouldn't have gotten there on our own. I think we've made a huge contribution as well. Our engineers have helped improve the PyTorch libraries. We've helped think through some Habana efficiencies. There's very low-level work that gets done here. There's also high-level work, because at the end of the day, one of the most important efficiencies we want to talk about is actually how we make your workforce more efficient.

Our goal is that your dollar spent on AI is $100 worth of value or more for your employees. We've actually learned a lot from your own AI journey at Intel, how you're transforming your company. We are bringing a lot of those innovations, like allowing employees to actually talk to your data and get insights right away. We are bringing that to all of you and to your enterprises.

Sachin Katti
SVP, Chief Technology & AI Officer, and General Manager, Intel

Yeah, now really thrilled about the partnership and how we are co-engineering with you. I think that's the change we need to drive with all of you. How do we actually co-innovate and solve your problems? Exciting stuff with AI agents coming. What's next for Inflection AI?

Ted Shelton
COO, Inflection AI

You mentioned in the intro about the fact that we're really thoughtful about making the AI private and personal to you, to your company, and learn from your company's data because we think these AI agents are going to be more valuable the more they know about your industry and your company and the specific functions and processes in your business. One of the really exciting things in our R&D lab right now is continuous fine-tuning so that every time your employees are interacting with AI, we're making the AI even better and even more customized and personalized for your business.

Sachin Katti
SVP, Chief Technology & AI Officer, and General Manager, Intel

Fantastic. Thank you, Ted. It's been a great partnership.

Ted Shelton
COO, Inflection AI

Thank you, Sachin.

Sachin Katti
SVP, Chief Technology & AI Officer, and General Manager, Intel

I think, as Ted mentioned, really exciting things to come forward. This era of customized inference and enterprise AI agents really tells you that inference is going to be that payoff for all of those AI investments. As the world starts adopting agentic AI, we are going to need to change enterprise workflows, how we actually run most of the common things we do in our day-to-day. That means that we need to have AI infused in all the products that you use. It can't just be in a data center. We need to offer different modes and sizes of compute capability to meet a range of these needs because agents are going to be running on your laptops, on your desktops, on your phones, in your private data centers, and in the public cloud.

In that context, we are making sure that we can infuse cost-effective, power-efficient AI capability in our entire portfolio. Our Xeon family offers the best CPU experience for AI in the data center for inference workloads, really allowing you to reuse and maximize the investment you need to make for general compute infrastructure anyway. You get all the benefits of the mature and the open software ecosystem that Xeon brings to the table. Similarly, on the AI PC, Jim talked about Core Ultra and all the innovation that we are packing into that SoC. You heard about the integrated NPU and the integrated GPU that provides AI processing power and local inferencing capabilities in an extremely power-efficient manner and all the cool applications that you could now run locally on the PC to make sure your data is secure.
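The idea of matching each workload to the right size of compute, from AI PC to edge to cloud, can be sketched as a toy routing function. Everything here is invented for illustration: the tier names, parameter limits, and privacy policy are not an Intel scheduler.

```python
# Toy sketch (all tiers, limits, and policies invented): route an
# inference request to the smallest tier of compute that can serve it,
# mirroring the "different modes and sizes of compute" idea above.
TIERS = [
    ("ai-pc",       {"max_params_b": 8,   "keeps_data_local": True}),
    ("edge-server", {"max_params_b": 70,  "keeps_data_local": True}),
    ("cloud",       {"max_params_b": 405, "keeps_data_local": False}),
]

def route(model_params_b: float, data_must_stay_local: bool) -> str:
    """Return the first (smallest) tier that fits the request."""
    for name, caps in TIERS:
        if model_params_b > caps["max_params_b"]:
            continue  # model too large for this tier
        if data_must_stay_local and not caps["keeps_data_local"]:
            continue  # tier would move data off-premises
        return name
    raise ValueError("no tier can serve this request")

print(route(3, True))      # small private model fits on the AI PC
print(route(405, False))   # frontier-scale model goes to the cloud
```

The privacy flag captures the local-inference argument made above: a request that must keep data on-device never falls through to the cloud tier, even if the cloud could run it.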

We are also beginning to see the story in the network. We announced the Xeon 6 SoC for networking workloads at Mobile World Congress just over a month ago. This is one that powers the software-defined transformation of these networks. It allows you to run your networking as if it's a piece of software on general-purpose Xeon SoCs. In addition, we introduced AI acceleration capabilities in this SoC, significant ones that allow you to do a lot of heavy inference on the SoC itself. Our partners like Ericsson and Samsung, who are driving the software-defined transformation of the network, are now leveraging those capabilities to infuse AI into how the network itself operates, as well as make it possible to host AI workloads at the edge. Telco operators love this. They've been looking for a way that allows them to modernize and monetize their network.

With this software-defined infrastructure running on general-purpose compute, they're able to modernize their network, and they get that additional benefit that they now have an edge compute platform that is capable of hosting AI workloads. Verizon, AT&T, Vodafone, just to name a few, are jumping on this journey, launching this network transformation, but also future-proofing their network to be able to host and monetize the coming AI workloads. That's where I want to wrap up. I think we are in the very, very early innings of this AI journey. Over the next few years, there's probably even more disruption coming at us, whether it's because of AGI, agentic AI, which is already upon us, quantum computing, as Lip-Bu Tan was talking about yesterday, and 6G coming in a few years. All of these are going to be disruptive changes that are going to transform all of us.

Our mission is to make sure that we are here for you. We'll listen to you. We'll understand and anticipate the problems that these disruptions are going to cause and work with you, co-innovating to make sure that we can deliver the best technology and the best products to help you navigate this transformation. With that, I want to hand off. Christoph is going to be back to wrap us up. He has promised he's not going to tackle me today.

Christoph Schell
EVP and Chief Commercial Officer, Intel

Well done. Well done. Thank you so much, my friend. Let's stay here for a second. The running joke between Sachin and me is that his content is always brilliant, but because he has a history as a professor at Stanford, he sometimes tends to go over time. I'm really very happy that you didn't do that this time. Okay? Thank you.

Sachin Katti
SVP, Chief Technology & AI Officer, and General Manager, Intel

I always listen.

Christoph Schell
EVP and Chief Commercial Officer, Intel

Thanks, Sachin. All right. I think you heard from our product team, Michelle, Jim, Kelleher, and Sachin, about how Intel inside your products, our customers' products, delivers the outcomes and the differentiation that we drive together. I want to turn the page now and talk a little bit more about great products, but great products in the context of Foundry. Okay? I have the pleasure of being across both Intel products and the go-to-market on Intel Foundry. I always refer to Intel Foundry as a startup. What I mean by that is that Intel has been manufacturing wafers for Intel products for many, many years, but we have not really ever offered this at scale to fabless customers. It is a new muscle that we are creating. It requires us to understand how these fabless customers actually want to operate.

They do not want to operate the way Intel products operate. I refer to it as a startup within Intel. There is a whole different team that is working on that. From a go-to-market point of view, Intel Foundry Services is led by my colleague, Kevin O'Buckley. I spend a lot of time with him. I decided to have him on stage here with me. Kevin, come on out.

Kevin O'Buckley
SVP and General Manager of Foundry Services, Intel

Hello. Hello, Christoph.

Christoph Schell
EVP and Chief Commercial Officer, Intel

Okay. You have little eyes as well. Did you stay at the bar last night while I left?

Kevin O'Buckley
SVP and General Manager of Foundry Services, Intel

Free coffee solves everything.

Christoph Schell
EVP and Chief Commercial Officer, Intel

Free coffee solves everything. Kevin, I think I talked a little bit about this startup that we're kind of driving within the company. Maybe it's a great way to lean in by you giving us a bit of an update. What has happened in the last 12 months? Where are we?

Kevin O'Buckley
SVP and General Manager of Foundry Services, Intel

I think you said it well. We are a startup. Our Intel Foundry business today is in the midst of a major transition. Exactly as you said, that transition is driven by the fact that we're moving from serving a customer of one Intel product to a customer of many. Our plan now is to essentially serve our foundry technologies to the entire industry of fabless companies. To do so, we're transforming to develop a very customer-centric culture. That changes things like how we even develop our technologies, the capital intensity we have, where we're going to put our capital in the ground, how quickly we need to ramp our manufacturing lines. Even the basic business process of how we operate as a team needs to change to be customer-centric to a broad spectrum of customers. We're even doing things like changing our organizational structure.

There's always work to do. Today, Intel Products is our largest customer, and we are very, very focused on delivering for them. We have publicly announced deals with other companies at this point, companies like Microsoft, AWS, and the U.S. government through the Department of Defense. Lots of customers, lots of negotiations to do, and much more to come as we drive this transformation.

Christoph Schell
EVP and Chief Commercial Officer, Intel

Okay. Obviously, as we look at all that you have to offer, from packaging to different nodes, there is one node that stands out: 18A. That is where you and I are spending a lot of time qualifying opportunities, qualifying the funnel, and then also converting. How is that going? How is 18A looking from a fabless customer's point of view?

Kevin O'Buckley
SVP and General Manager of Foundry Services, Intel

It's going well. Thanks. As a foundry, think of our offering as a menu of technologies. Our foundry is not just for fabless companies broadly, but for customers that want the fastest compute performance, the highest memory capacity, and, like Sachin, the highest memory bandwidth available. For us at Intel Foundry, our solution to that is a technology we call 18A. We're very proud to say 18A is actually the industry's first two-nanometer-class commercial technology that combines two new fundamental elements in our industry. The first of those is called Gate-All-Around transistors, a change to the architecture of chip devices that allows them to continue to scale and get faster. The second element we're combining in our 18A technology we call backside power. Backside power allows us to essentially improve the robustness of the power supply of the chips we develop. Again, we're very proud that we're the first in the industry to bring those technologies together.

Christoph Schell
EVP and Chief Commercial Officer, Intel

Yes. It's not always easy.

Kevin O'Buckley
SVP and General Manager of Foundry Services, Intel

That's right.

Christoph Schell
EVP and Chief Commercial Officer, Intel

To pull off something that is this new.

Kevin O'Buckley
SVP and General Manager of Foundry Services, Intel

That's right. I do need to say me standing up on stage and talking about the technology is important. It's important that the industry understands us and the transformation that we're making. Our customers aren't looking to see our PowerPoint slides. They're looking for us to show them hardware data. I'm very happy today to use this forum to announce that based on the hardware that we've delivered to our customers, we're now ready to announce that we've entered into what we call the risk production phase.

Christoph Schell
EVP and Chief Commercial Officer, Intel

Do we have to be concerned about that?

Kevin O'Buckley
SVP and General Manager of Foundry Services, Intel

Yeah, that sounds scary. I promise, risk production, while it sounds scary, is actually an industry-standard term. The importance of risk production is we've gotten the technology to a point where we're freezing it. Our customers have validated that, yep, 18A is good enough for my product. What we have to do now, the risk part, is to scale it from making hundreds of units per day to thousands, to tens of thousands, and to hundreds of thousands. Risk production for us, and what we're now doing today, is scaling our manufacturing up and ensuring that we can deliver not just the capabilities of the technology, but those capabilities at scale.

Christoph Schell
EVP and Chief Commercial Officer, Intel

Okay. Okay. Exciting times because both of us are in sales. Now it feels like we're starting to convert. We still can't talk about this because a lot of customers are very tight-lipped about working with us. Is that correct?

Kevin O'Buckley
SVP and General Manager of Foundry Services, Intel

Right on.

Christoph Schell
EVP and Chief Commercial Officer, Intel

Yes, that's right on. Kevin, maybe last question for me. What's next?

Kevin O'Buckley
SVP and General Manager of Foundry Services, Intel

I was thrilled yesterday when our new CEO, Lip-Bu Tan, came up and told us about our Intel Foundry Direct Connect event. I just want to first reaffirm that: you are all cordially invited to join us on April 29th at the San Jose Convention Center, if you're able to. We use that forum, similar to an Intel Products forum like this, to talk about our product roadmap as a team, but also to bring customers out and talk about their experience in working with us at Intel Foundry. We will have a keynote from Lip-Bu Tan similar to this. Stay tuned. I hope to see you there in April.

Christoph Schell
EVP and Chief Commercial Officer, Intel

Okay. Kevin, really appreciate you being on stage. I also appreciate you as an awesome colleague. Kevin joined Intel only one year ago. I think it's a very close partnership between the two of us.

Kevin O'Buckley
SVP and General Manager of Foundry Services, Intel

Thank you. Thank you. Thanks, team.

Christoph Schell
EVP and Chief Commercial Officer, Intel

All right. Now that you heard the latest and greatest about our foundry, let's talk a little bit about how I see innovation happening not only on a product level or a technology level, but on the go-to-market level as well. If you work with all these engineers, you don't want to lag behind. If you're not an engineer, you also want to talk about innovation. There are four key things that I want to highlight. Number one, you heard this from Lip-Bu Tan, but I just want to underline this again. We are refocusing on the core. Lip-Bu yesterday even said that we might make decisions to spin off businesses that we consider non-core. He hasn't really told us yet what this means. I got this question yesterday from a couple of you when we were at dinner.

We are working on that. I'll tell you, we will be very transparent about decisions that we make on businesses, decisions that we make on roadmaps. We also discussed last year at Vision the importance of being organized from a go-to-market point of view by verticals. There are two verticals that stand out for me from an opportunity point of view. One is the government vertical. You heard about the U.S. Secure Enclave deal announced last year. It's a very important deal that actually only a company like Intel can pull off. It's an R&D project and a manufacturing project, okay, with the U.S. government. You also heard me talk about automotive. You saw the example that Michelle brought on. There are a lot of exciting opportunities in automotive, an industry that is being disrupted by new entrants into the market.

A lot of companies are trying to make the switch to EVs. A lot of companies are trying to make the switch to autonomous driving as well. We are at the heart of some of the innovation that is being driven there. From a go-to-market point of view, we need to work with an automotive company in a very different way than we work with a classic OEM in the PC or server space. Okay? We are learning the ropes on how to do that. For both government and automotive, we have created a team that is basically a business unit, but that also has the go-to-market functionality embedded within it. The second point: because of those targeted approaches, we have moved our marketing spend more to account-based marketing. You will see fewer spray-and-pray marketing efforts from us going forward.

We want to be very targeted on key accounts. We want that account-based marketing to really run through the entire go-to-market value chain that we have to offer. You will actually see a brand refresh this week. Stay tuned for that. Okay? You will see it throughout the event today. It takes some best practices from the past, but puts them into the context of what I just described, a more vertical, industry-informed go-to-market. I also already talked about making it simpler for you to work with us: moving rebates, a thing of the past for some of the programs we have, upfront into pricing, making it easier to understand what the pricing is and removing the backend work.

I think on the Intel Foundry side, the exciting piece for me is the combination of Foundry and Intel products. When we talk about custom or semi-custom opportunities, we announced last year, for example, an opportunity with AWS on an AI fabric chip that is based on 18A. We're also talking about custom Xeon 6 products based on Intel 3. This is something that only Intel can pull off. We have really ensured that this is a value proposition that we want to land with different customers. If there's interest to learn more about this, please, let's use the opportunity today and tomorrow to really talk about this. A big part of this is that we need to enable this ecosystem that Intel actually created many, many years ago to do more for us.

The opportunity to be more partner-centric, to pull you more into our go-to-market, and to almost outsource some of the coverage, but also the product development when we talk about systems, to system integrators and to ISVs is something that we really made a priority last year. I want more of that in 2025. I'm really delighted about our engagement with ISVs and with SIs. You saw the examples that Jim was talking about when it comes to the AI PC, how we get the applications optimized on Lunar Lake and, going forward, on Panther Lake as well. We will keep on investing in this. We are adjusting our partner alliance programs accordingly. This has an impact on how we spend contra budget and how we spend MDF budget. We will transparently explain this to all of you.

The fourth focus is strengthening the regions, continuing to move resources to a more decentralized setup. That is something that I started three years ago when I joined Intel. We are continuing to do more. Last year, we announced a fifth region. This region is India. I decided to pull India out of Asia-Pacific and Japan for one reason: it is the highest-growth opportunity of any region that we have in our space globally at this very moment in time. I wanted there to be focus. You will see more of that. We will change the shape of our go-to-market. We will change the shape of how we manage regions and markets based on where I see a pulse, feel a pulse, and where we get a good return on our investments. The strength of the ecosystem is important.

That is why I'm telling you all of this, because I want you to pull with me in the same direction and see where there's commonality in strategy, commonality in coverage. One more example I want to give today, talking about go-to-market innovation, is strategic partnerships. Let me introduce this a little bit. You probably all know F5 as an application delivery and security leader. They are operating on the Azure cloud platform. We have been sitting together with F5 and with Azure. We talked about how to differentiate the partnership that Intel has with both companies and bring that together in a partnership where three companies are teaming up. The idea was to give customers of F5 more choice when it comes to cloud, hybrid, on-prem, and security.

We wanted to be able to offer that in a very consistent way across the enterprise customers that we have. Consequently, we then decided to do something that Intel has never done before. We decided that Intel would become a managed service provider with Azure. We are not just deploying the tech behind the Azure infrastructure. We are now becoming a go-to-market partner to also resell, if you want, that infrastructure to targeted customers together with F5. That is the background. To explain this in a little bit more detail, I am super delighted to have Microsoft's Mark Linton and F5's Kunal Anand here with me. Please welcome them on stage. Mark, hi.

Mark Linton
VP of Device Partner Sales Organization for Global Partner Solutions, Microsoft

Hi, Chris.

Christoph Schell
EVP and Chief Commercial Officer, Intel

Thank you for making it. Good to see you. Kunal, how are you doing? Thank you, Mark. Okay. I hope that intro worked for both of you.

Mark Linton
VP of Device Partner Sales Organization for Global Partner Solutions, Microsoft

It worked great.

Christoph Schell
EVP and Chief Commercial Officer, Intel

Made perfect sense, even with my German accent.

Mark Linton
VP of Device Partner Sales Organization for Global Partner Solutions, Microsoft

Very clear.

Christoph Schell
EVP and Chief Commercial Officer, Intel

Very clear. Super happy. Kunal, I'm going to start with you. Maybe introduce F5 a bit to us. Talk about your vision and talk about how you want to enable that vision.

Kunal Anand
Chief Product Officer, F5

Amazing. Thank you. It's great to be here. I'll start by just sharing a little bit about F5 and who we are. We are focused on application delivery and security. Right now, the world needs all of those capabilities. Today, when we look at the world of applications and APIs and the evolving architectures with AI, where these apps and APIs are evolving into agents, we're seeing this dynamic environment play out. People are in these hybrid multi-cloud environments.

What they're really looking for is the ability to deliver these experiences with resiliency, availability, and, of course, to do it in a secure manner. We've been partnering with Intel for almost 30 years now. Our flagship product is called BIG-IP. BIG-IP has had all the great innovation from Intel over many, many decades. We have been able to bring all that richness and capability because of that partnership. We are really excited about the net new innovation, specifically around IPUs, cryptographic acceleration, et cetera. Thank you again.

Christoph Schell
EVP and Chief Commercial Officer, Intel

That's great. Mark, I mean, this is new territory for both of us. I mean, we've known one another for a long time.

Mark Linton
VP of Device Partner Sales Organization for Global Partner Solutions, Microsoft

We sure do.

Christoph Schell
EVP and Chief Commercial Officer, Intel

Designing ecosystems, designing go-to-market as well. This is different.

Mark Linton
VP of Device Partner Sales Organization for Global Partner Solutions, Microsoft

Yes.

Christoph Schell
EVP and Chief Commercial Officer, Intel

Because now we actually become a go-to-market partner of a Microsoft product.

Mark Linton
VP of Device Partner Sales Organization for Global Partner Solutions, Microsoft

Yes.

Christoph Schell
EVP and Chief Commercial Officer, Intel

How do you feel about that?

Mark Linton
VP of Device Partner Sales Organization for Global Partner Solutions, Microsoft

Yeah, it is different. Look, we've partnered since the early days of a PC on every desk and in every home. We've seen client-server, the internet. Now we've got cloud, and we've got AI. It is a natural progression of the partnership there. As you think about the innovation from the silicon up to the cloud, we can really optimize complete systems for customers and also for ISV partners like F5, to ensure that our partner ecosystem can leverage that innovation and provide great solutions for customers as well. I think it's a natural progression, and we're really excited to support it.

Christoph Schell
EVP and Chief Commercial Officer, Intel

It's a natural progression on the go-to-market side. I think what's important for me is the learnings that we gain. I want to bring them back into Intel and inform our roadmap, and then also maybe inform how we think about custom chips, okay, custom chips for Azure, custom chips for F5. It was important for me that you understand how this all hangs together and how it informs our roadmap going forward. Kunal, when we look at the opportunities that you're driving, how is this going to help you grow?

Kunal Anand
Chief Product Officer, F5

Yeah, first of all, I'm really excited about what this partnership between all of us means, fundamentally, because it changes the way that we approach the market. It fundamentally changes the way that we go after and help these organizations globally protect themselves. What I'm really excited about, in terms of what this partnership represents when you combine the best of Intel, the best of Microsoft with Azure, and, of course, F5, is that it opens up this opportunity to meet customers where they are. We talked a little bit about BIG-IP at the beginning, but one thing that you may or may not know about is NGINX. NGINX is one of the world's most popular software-based ingress controllers.

What we heard loud and clear from our customers was to be able to deploy that inside of a cloud-based environment like Azure and to do it in a simple way. That is what we ended up crafting: the ability to support large financial services organizations and many others seamlessly. The best part about it is that all of this runs exclusively on Intel. Our customers are ultimately able to get the best hardware and the best cloud environment with the best delivery and security capabilities, all in one simple, easy-to-consume solution.

Christoph Schell
EVP and Chief Commercial Officer, Intel

Okay. Obviously, the fact that it's one technology stack that you have to develop on makes life simpler for you as well.

Kunal Anand
Chief Product Officer, F5

Yes.

Christoph Schell
EVP and Chief Commercial Officer, Intel

Okay. Obviously, it's a cost point as well. Mark, we talked about the go-to-market value. What other value do you see that Intel can bring to Azure and your cloud business?

Mark Linton
VP of Device Partner Sales Organization for Global Partner Solutions, Microsoft

Look, I think the integration is key. As you think about all of our customers, most of the customers I talk to are in a hybrid world where they want to have an efficient data center. They want cloud services and the scale that brings, but they also want integrated security, manageability, compliance, and controls.

They want application optimization to be able to seamlessly move and, frankly, run those workloads where they want to run them. I think this offering helps us on the technology side. I also love that now you're an MSP. You can take more of a one-stop-shop approach for customers so that they can get all the support and services they need. And of course there's our partner ecosystem, which is the bedrock of Azure. We have thousands of partners there that are trained and actively selling Azure for our customers. Really, our partners can take advantage of this partnership as well, whether they're building software and IP, or whether they're more of a services and consulting firm. It's a great opportunity.

Christoph Schell
EVP and Chief Commercial Officer, Intel

It's a great opportunity.

For all partner types. I agree. For me, there's also a cultural opportunity within Intel. If we really believe we're as good as we tell F5 and Microsoft we are, our own salesforce should also be equipped to resell that managed service. That is what we are going to do.

Kunal Anand
Chief Product Officer, F5

Exactly.

Christoph Schell
EVP and Chief Commercial Officer, Intel

Mark, I appreciate you being here.

Mark Linton
VP of Device Partner Sales Organization for Global Partner Solutions, Microsoft

Thank you, Chris.

Christoph Schell
EVP and Chief Commercial Officer, Intel

Thanks a lot.

Mark Linton
VP of Device Partner Sales Organization for Global Partner Solutions, Microsoft

Thanks for having us.

Kunal Anand
Chief Product Officer, F5

No, thank you very much.

Christoph Schell
EVP and Chief Commercial Officer, Intel

Okay. Please give them a round of applause. Now we talked a lot about the transformation. We talked about products. We talked about Foundry. We talked about innovation in go-to-market. Let that all sink in. I think what you can expect from us in 2025 is you will get a really focused Intel. I think I really want to underline this again, what Lip-Bu said yesterday as well. We have work to do. We know that. We started the work a couple of years ago already, but it takes a village, and it takes time. Second, we're really invested in your success. That's very important. Please be brutally honest with us. The feedback that you want to provide, please provide it. Please compare us to competition. Please tell us what you require from us to be successful.

Thirdly, you can rely on us to manufacture and to design world-class products; that's the DNA that Intel was founded on. You can also rely on the fact that this will be done with an open-standards mindset. I think that's the other thing that differentiates us. We know that we have work ahead of us and that there will be transformation. You can also rely on these three things as the foundation that we will not waver from. With this, I want to thank all of you, the people online and also the people in the room, for getting up early in the morning and for joining us on this April 1st. For you online, this is the end of the session. For the people in the room, we have a special.
