Intel Corporation (INTC)

Barclays Technology, Media, and Telecommunications Conference

Dec 6, 2023

Tom O'Malley
Managing Director and Equity Research Analyst, Barclays

Tom O'Malley, Semiconductor and Semi-Cap Equipment Analyst here at Barclays. We're lucky enough to have Sachin Katti, who is the SVP and GM of the Network and Edge Group at Intel. Very nice to have you here.

Sachin Katti
SVP and Chief Technology and AI Officer, Intel

Thank you for having me.

Tom O'Malley
Managing Director and Equity Research Analyst, Barclays

So, I think that when most people look at Intel, the Network and Edge Group is not the prettiest or the fanciest, but I think it's an area where people could learn a lot more. And so why don't you start by just giving us a little background about yourself? I know you've been at Intel a little over two years now, and in this role almost a year. So it'd be great to hear more about your background and your experience, and how you got to the position you're in today.

Sachin Katti
SVP and Chief Technology and AI Officer, Intel

Yeah, happy to. As Tom mentioned, I took the role to run the business back in February this year, so a little less than a year, but I've been at Intel for around two years. I'm still the CTO for NEX. And I'll get into NEX in a minute, but my background is a bit unusual. I'm a professor at Stanford, so I made the hard switch from academia to running a business that was around $8 billion last year. Pat likes to joke whenever he introduces me that he saved me from a lifetime of boredom in academia. I'm not sure I wanted to sign up for so much excitement, but it is exciting. But nevertheless, before that, I did a couple of startups.

The last startup got acquired by VMware, and that's how I got to know Pat, and I came to Intel via Pat. So NEX, as you mentioned, is relatively less well understood as a part of Intel. It's the third biggest business in Intel; last year, it was roughly an $8 billion business. NEX was formed two years ago by putting together three existing businesses in Intel. The first one was around cloud connectivity and Ethernet. This is the part of the business that builds Ethernet products for the enterprise and telco market, and now IPUs, infrastructure processing units. This is our version of the SmartNIC or the DPU, and it's being sold into the cloud. Obviously, Google Cloud was our definitional customer and partner in building the IPU, and now it's sold to many other cloud customers.

So that's one part of the business. The second business, which is a big chunk, is telco and enterprise networking: selling a variety of Intel's Xeon-based products to run telco networks in a cloudified manner, whether it's 5G Core or 5G Radio Access Networks, plus enterprise networking, enterprise security, SASE workloads, all of the typical enterprise networking and security applications running on top of Intel. So that was the second business that became part of NEX. And the third one, which you probably are all aware of, was previously called IOTG, the IoT business group. We renamed it to Edge because it's now becoming more of an Edge computing play that's much broader than just IoT applications, and so that is the third business.

So overall, that's what NEX is: a combination of Ethernet, Xeon, and other products that are being sold into networking and Edge computing markets. It had been growing double digits for the last two years; last year it was an $8 billion business. This year, we are going through a painful inventory correction, and we'll get into that in a minute. But, yeah, super excited to be here.

Tom O'Malley
Managing Director and Equity Research Analyst, Barclays

Perfect. Yeah. So, I think a good way to start is to dive into each of the three segments you talked about. Can you just go into each? You said it's been growing double digits, and clearly, you're in a cyclical correction right now, but traditionally, could you just outline what those three verticals grow at? What's the consistent long-term growth rate that we should be thinking about when we look at those three buckets?

Sachin Katti
SVP and Chief Technology and AI Officer, Intel

Yeah. Maybe let me deconstruct, because each of these-

Tom O'Malley
Managing Director and Equity Research Analyst, Barclays

Sure.

Sachin Katti
SVP and Chief Technology and AI Officer, Intel

has different kinds of market cycles here. So I'll start with telco, because that's the one that, as many of you are probably aware from recent reports from the telco vendors themselves, is going through a correction. This is not atypical. With every new generation of cellular standards, there's a lot of investment at the start of that cycle. Then they go through a trough period until the next generation comes along, when 5G comes along, for example. COVID-19 actually accelerated that. Because of COVID-19 and remote work, a lot of mobile infrastructure build-out happened in 2020 to 2022, and I think both North America and Europe are going through a lull right now in terms of telco operator spend.

So that segment had been growing double digits, let's say low teens. This year and next year, we are going through that correction, so we are not out of it. I'll get into some of the announcements that happened the day before yesterday in a little bit. So telco is one segment. The Edge is too broad to pinpoint any one factor, but, in general, the Edge and the industrial and embedded markets are going through this inventory correction later than the PC and the server market. Again, you see this in other recent earnings releases from peers of ours. The industrial- and automotive-indexed companies are seeing that inventory correction beginning to happen now.

So those segments are going through that, but we have seen quarter-over-quarter sequential growth. In Q3, we did see quarter-over-quarter sequential growth in our Edge business, so we are beginning to see that turn around and recover. That had been growing high teens before, right? And so we expect that to start normalizing into next year. Ethernet, I think, is indexed to the enterprise and Edge markets. Most of our Ethernet business today is in the enterprise segment. When it was a supply-constrained environment last year, I think people were buying every Ethernet adapter they could lay their hands on. That slipped this year, so there's a lot of inventory in the channel that is now being drained through. And that's at, let's call it, below-100-gigabit Ethernet speeds.

For high-end Ethernet, I'd say the X factor is AI. Today, AI is driving a lot of networking spend, but it's going essentially to one vendor. I think as Ethernet becomes an alternative network fabric in the cloud for AI, that is going to be a big tailwind, but that's coming in the future.

Tom O'Malley
Managing Director and Equity Research Analyst, Barclays

Perfect. So I think that's a great launching point for the AI question, which I think is being addressed in almost every fireside here at the conference. But I've heard Intel mention this concept of hybrid AI. Can you explain what that means to Intel, and how you see that market developing at the Edge over the next couple of years?

Sachin Katti
SVP and Chief Technology and AI Officer, Intel

Yeah. When you think about AI, obviously there's a lot of attention right now on the cloud as new big models are getting trained. But what we are seeing from customers is that as these models mature, and open source models like Llama 2 and others start coming out, customers want to run inference at the Edge, and there's a variety of reasons, right? It could be because the cloud is too expensive. They don't want to ship all this data back and forth. They don't want to pay the API charges that these big models have. I mean, if you talk to the startup ecosystem in the valley, and we are all here in the Bay Area, I think a big chunk of the gross margin is sitting in just OpenAI, right?

So there is a large economic incentive to run a lot of this inference locally with these LLMs, right? Edge AI is being driven by that. The second reason, of course, is privacy. The prompts themselves contain a lot of private data, right? And now a lot of enterprises want to fine-tune these models with their own private data, and they want control over how this data is used, how the prompts actually get engineered, and what kind of data is getting exposed. So there's a big impetus to run this kind of inference, again, at the Edge. Both those factors, cost as well as privacy and data sovereignty regulations, are pushing inference to the Edge. And so let me be precise.

We expect models to be trained in the cloud, but over time, the majority of inference will happen at the Edge. That's already been happening with computer vision, and we expect it to happen even with large language models. If you look at Windows Copilot, for example, I think even Microsoft would want a lot of the inference for Copilot-like applications to happen on your PC, because they'd much rather offload the cost of serving that inference to your devices than have to run it on their own floor, which is quite expensive for them. So coming to hybrid AI, just to close that thought quickly: you will run most of the inference at the Edge, but occasionally, you need the power of the cloud. At the Edge, for example, we'll be launching Core Ultra next week.

That's our flagship laptop processor; it's going to be launched in New York next week. It's going to have all of the AI capabilities, like an NPU and a GPU, integrated into the CPU itself. So you no longer need a discrete part. It's all in one package, in one die, in the SoC. With this, we are now able to run a 30-billion-parameter large language model like Llama 2 locally. So if you want to run an inference query on a 30-billion-parameter model, you don't have to go to the cloud at all. You can actually do it on your PC, on an Edge computing device. You can run everything locally. We actually demonstrated this at Intel Innovation back in September.

So I think this capability is going to become commonplace, and with every generation of this silicon, the size of the model that we can run locally will keep increasing. But occasionally, you may need to go call a 1-trillion-parameter model in the cloud that's just more capable, and so that's what we mean by hybrid AI. Just like hybrid cloud, where you run a big chunk of your enterprise compute on-prem but also have some of your compute sitting in the cloud, I think the same thing is going to begin to happen with AI: a big chunk of the inference will happen locally, but occasionally, people will want to use the cloud capability.
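The hybrid-AI split described here can be sketched as a toy routing policy: serve an inference request locally when the model fits the edge device's capacity, and fall back to a cloud endpoint for larger models. The function name, tiers, and the 30-billion-parameter threshold below are illustrative assumptions drawn from the remarks, not any Intel software:

```python
# Toy sketch of hybrid-AI routing: run inference at the edge when the
# model fits on-device, otherwise call out to the cloud. All names and
# the capacity threshold are illustrative, not an Intel API.

LOCAL_CAPACITY_BILLIONS = 30  # e.g. a ~30B-parameter model runnable on-device

def route_inference(model_size_billions: float) -> str:
    """Decide where an inference request should run."""
    if model_size_billions <= LOCAL_CAPACITY_BILLIONS:
        return "edge"   # local: cheaper, no API charges, prompts stay private
    return "cloud"      # occasionally call a larger, more capable model

# Example: a 7B model stays on-device; a 1T model goes to the cloud.
print(route_inference(7))     # edge
print(route_inference(1000))  # cloud
```

As the silicon improves generation over generation, only the capacity constant changes; the routing logic stays the same, which is the sense in which hybrid AI mirrors hybrid cloud.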

Tom O'Malley
Managing Director and Equity Research Analyst, Barclays

I think there's a big debate around the percentage of where that compute takes place, whether it's mostly on the Edge or mostly in the cloud. One thing that gets brought up pretty frequently in my conversations is, right now, you really have one device, the smartphone, right? You're talking about potentially a PC processor or maybe new devices entering the market that you guys can penetrate. Can you talk about how you'd be able to penetrate the smartphone market, if that's where AI takes place? Or do you expect other devices to kind of be the tip of the spear when we're talking about AI at the Edge?

Sachin Katti
SVP and Chief Technology and AI Officer, Intel

Yeah. If you think about the PC, let me start there. The PC is ultimately a productivity device, and AI today is mostly a productivity enhancer. If you think about the kinds of things Microsoft and others are pushing toward with Copilot, they're about: How do I make you more productive? So I think the PC is definitely a natural landing point for a lot of these cases. When I talk about the Edge, I'm talking a lot about the physical infrastructure around it, not your personal devices at this time. You walk into a McDonald's drive-thru, and when you're ordering some food, that's now being AI-enabled rather than a human taking your order, right? Going into a factory: how do you do better weld detection, quality detection, these kinds of capabilities?

Going into a Home Depot, the customer service kiosk: instead of having to call a human, you can engage with a bot right there that answers your queries. So this kind of infrastructure is definitely where we see a lot of these AI capabilities beginning to penetrate. I think the phone, of course, is going to show up there; that's where a lot of your personal data is. But the Edge, when we think about it, is much broader than the phone. It's your PC, it's all of this Edge infrastructure around us, and a lot of enterprise digital transformation and automation beginning to happen through this Edge infrastructure.

Tom O'Malley
Managing Director and Equity Research Analyst, Barclays

You just named several examples of where this could interact with the daily user on a given day. When you look at what the Edge may bring in terms of TAM to Intel, how do you size that market?

Sachin Katti
SVP and Chief Technology and AI Officer, Intel

Obviously, these numbers are still in flux. Everyone's trying to get a grip on how big they could be. If you look at some of the estimates, the Edge market, broadly defined across hardware and software, is roughly a $450 billion market, across the entire stack. Hardware is one chunk of that. I think the best way to think about what AI is doing is that the AI-related spend we can track at the Edge is growing at a 20-plus percent rate. When we look at the split in our Edge compute deployments, AI-related spend is growing at a much faster pace than the normal Edge market as well.

Tom O'Malley
Managing Director and Equity Research Analyst, Barclays

Perfect. So you have a TAM that's clearly very large. What products are you going to use to address that TAM? And then I think one interesting AI-related product under your segment is OpenVINO. Can you just comment on that as well?

Sachin Katti
SVP and Chief Technology and AI Officer, Intel

Yeah. So when we think about the Edge, AI at the Edge is added to your existing applications. If I think about a drive-thru or a retail store, everyone wants to enhance their checkout experience with AI capabilities. Everyone wants to augment the humans who are taking orders or serving customers. So we look at AI as being added on to existing workflows in many of these locations. When we think about our products, we are taking our core x86 franchise. We have built a great ecosystem on that, and there's a large collection of software already running on x86, but we are now adding AI acceleration capabilities to our products with Core Ultra next week. So we will have the Intel 4 process node, right?

A CPU on that, which will be our new node, but we will add an NPU and a GPU to that same SoC, and the same approach at the Edge. Every Xeon that ships for the Edge will have an integrated accelerator, an integrated GPU, and an NPU. So that's the product form factor at the hardware level: we'll have Xeon, Core Ultra, and Atom-based systems with integrated AI capabilities. Now, if you're a developer, you look at this and you're probably wondering: How do I take advantage of all of these capabilities in the hardware? Because there's going to be a lot of diversity, depending on how much AI capacity you want. What we are doing with OpenVINO, our software layer, is abstracting away the complexity of the hardware. Say you're a developer.

You can pick an off-the-shelf model trained anywhere, on NVIDIA or anything else. Pick it up from Hugging Face, and OpenVINO will take care of optimizing and running that model across this collection of hardware, at the Edge or on the PC. So OpenVINO is a cross-platform software substrate for running inference at the Edge and on the PC. We make developers' lives easier and reduce the time to market for infusing AI into your application.

Tom O'Malley
Managing Director and Equity Research Analyst, Barclays

Very helpful. So we've talked about the Edge a little bit; I want to pivot to the networking side. NVIDIA's been having a lot of growth in their networking portfolio, clearly: InfiniBand, NVLink, Spectrum-X. And that's all driven by their AI data centers and their core GPU portfolio, right? So how does Intel compete in the AI networking segment when, today, so much of the development is centered around their core GPU products? How do you counter that?

Sachin Katti
SVP and Chief Technology and AI Officer, Intel

Yeah. So today, obviously, the AI infrastructure is vertically integrated. That's what's happening in the market. But we are making headway. With the IPU, for example, when Google announced its NVIDIA instances, the networking is actually running through an Intel IPU. So if you look at the Google Cloud NVIDIA instance, IPUs are sitting in there doing a lot of the networking. And at OCP, the Open Compute Summit, just a couple of months ago, Google announced something it had co-innovated with us called the Falcon transport, which runs on Ethernet. It is a transport protocol built for these hyperscale AI workloads.

So what we are now doing is building on that and starting to deliver Ethernet as an alternative to InfiniBand and NVLink: using that transport, using the Ethernet layer, and providing an open alternative to InfiniBand and NVLink. That's where the ecosystem is headed.

Tom O'Malley
Managing Director and Equity Research Analyst, Barclays

Do you think that you need to see a transition in product offering, as in you need alternatives to the core compute away from an NVIDIA GPU for you to find success? Or do you think that your outreach in the example that you just kind of named is going to get enough penetration to shift most of the market in your direction? Just talk about how you're finding ways to win when clearly, at least today, there's a pretty strong stranglehold on at least the core processing power.

Sachin Katti
SVP and Chief Technology and AI Officer, Intel

Yeah. So I think with the UEC, the Ultra Ethernet Consortium, the whole industry is looking to build an Ethernet-based alternative for AI networking, regardless of what GPU you are connecting it to. It has been designed so that it can connect to an NVIDIA GPU, an Intel GPU, an AMD GPU, or even a hyperscaler's own internal AI accelerator. So we expect that networking, like it always has, will standardize and provide many more alternatives for connecting to any compute, whether it's GPUs or CPUs. Now, I think the question is how quickly this transition happens. Ethernet has a long history of being able to scale, right? The largest fabrics getting built out there in data centers are all based on Ethernet.

Ethernet usually has a much larger ecosystem of vendors and technologies at play. I think the consensus in the industry is that we will start seeing the shift to Ethernet-based fabrics starting next year and accelerating with the 800-gig generation in 2025, when 800-gig Ethernet becomes standard in many of these data center deployments. UEC is obviously racing to standardize how we would do it for AI, but I think we are all investing in building Ethernet-based products for this AI platform. Are we taking questions?

Tom O'Malley
Managing Director and Equity Research Analyst, Barclays

Yeah, go ahead.

Speaker 3

[audio distortion]

Tom O'Malley
Managing Director and Equity Research Analyst, Barclays

... You mentioned Falcon RT. I mean, that's Google's congestion management, you know. You don't have access to that yet, do you?

Sachin Katti
SVP and Chief Technology and AI Officer, Intel

It was co-developed with us. It is integrated into the IPU hardware.

Tom O'Malley
Managing Director and Equity Research Analyst, Barclays

So is it open source yet? I thought Google didn't want to open source that.

Sachin Katti
SVP and Chief Technology and AI Officer, Intel

It is intended to be open source.

Tom O'Malley
Managing Director and Equity Research Analyst, Barclays

It's going to be, but it's not yet.

Sachin Katti
SVP and Chief Technology and AI Officer, Intel

Correct. So they opened the specs at OCP, and then they'll open up the implementation.

Tom O'Malley
Managing Director and Equity Research Analyst, Barclays

It's not an Intel exclusive?

Sachin Katti
SVP and Chief Technology and AI Officer, Intel

No, no. We will support it, but anyone can implement that hardware spec. I think right now, the IPU is the only hardware that has that spec in it from a networking perspective, and then the software is going to follow.

Tom O'Malley
Managing Director and Equity Research Analyst, Barclays

Then you mentioned next year's Ethernet. Do you expect more of a scheduled-fabric type of deployment, or more like what Google, Amazon, and you guys are looking at, in terms of having congestion management with the help of the NIC and the RT piece?

Sachin Katti
SVP and Chief Technology and AI Officer, Intel

More the latter. We expect that with UEC and with Falcon, we can actually move toward a more reliable transport and a congestion-managed approach, rather than a scheduled fabric like InfiniBand.

Tom O'Malley
Managing Director and Equity Research Analyst, Barclays

Zooming back out, can we talk about the networking business and its exposure to both the wired and the wireless side? Could you break down the percentages, and where you see those two businesses headed in the next year?

Sachin Katti
SVP and Chief Technology and AI Officer, Intel

Yeah. So let me start with wireless. Our exposure there is on 5G, and that, again, as I said, had been growing double digits; it's a big chunk of our business. In 5G, we have two pieces. One is the Xeon-based business: all of the 5G Core and Cloud RAN business is running on standard Xeon. And then we also do custom compute for 5G. We announced with Ericsson the custom SoC that we are going to build for them, for their next-generation base station infrastructure. And a week or two ago, they announced their current-generation custom compute for their base stations. That is also built on Intel 4.

Tom O'Malley
Managing Director and Equity Research Analyst, Barclays

Mm-hmm.

Sachin Katti
SVP and Chief Technology and AI Officer, Intel

It's actually the first external product on an Intel EUV node, the Intel 4 node, powering Ericsson's current base station infrastructure. So for 5G, for wireless, we have two pieces. One is the standard offering, which is Xeon-based; anyone can buy it through a Dell or an HP or a Lenovo and build out their infrastructure. And then there's custom silicon, like what we built for Ericsson and others, that they build their custom infrastructure on top of. For enterprise, it's all standard. We build Xeon and Xeon D; Xeon D is the Edge variant of Xeon, optimized for networking workloads, and our customers build SASE and CDN sorts of applications on top of Xeon.

Tom O'Malley
Managing Director and Equity Research Analyst, Barclays

So you mentioned Ericsson. Obviously, there's news out this week about a supplier shift at AT&T, with Ericsson winning and Nokia losing some of that business. Could you size what those opportunities potentially mean for you? And if you can't be specific on the numbers, just talk about what it means to go from a base station processing unit sitting at the bottom of a tower to some sort of virtualized processing. Clearly, you've got exposure in the box at the bottom of a tower, but what is the trade-off between the box and a data center that's serving those X number of towers in the area?

Sachin Katti
SVP and Chief Technology and AI Officer, Intel

Yeah. So what Tom is referring to is the AT&T announcement on Monday that they are picking Ericsson to be their vendor for their RAN portfolio, and specifically, AT&T is going with Open RAN and Cloud RAN. Ericsson is a big strategic partner of ours, and as I said, both their traditional RAN and their Cloud RAN portfolios run on Intel. So all of Ericsson's RAN portfolio basically runs on Intel, right? And to Tom's specific question: what's happening, Tom, is that the box, which used to be built with custom silicon from us, will now get built with standard Xeon silicon. And now you are able to run not just the base station software but potentially other Edge computing software, too. And you can run it in a cloud-native manner because it's standard Xeon.

So the same Kubernetes infrastructure that you're using to manage a cloud can now manage a box sitting at the bottom of the tower. There are tremendous automation and management benefits for AT&T, because they can bring the same expertise they use to manage their data centers to running the bottom of the tower as well. In terms of size, the best way to think about it is that we give AT&T flexibility. They can deploy it at the bottom of the tower; they can deploy it at the central office, where it's a collection of Xeons rather than one; and, of course, they can run the same compute infrastructure in their data center.

From their data center all the way to the bottom of the tower next to the antenna, it's just Xeon, and it's the same cloud-native infrastructure running end-to-end across their entire network, from the 5G Core all the way to the radio access.

Tom O'Malley
Managing Director and Equity Research Analyst, Barclays

So there are trade-offs with every technical decision, but when you are virtualizing a network, moving away from the towers, latency becomes an issue. If you don't have processing on-site, can you talk about how you combat the fact that you need to move information over longer distances? Does that help your network in any way?

Sachin Katti
SVP and Chief Technology and AI Officer, Intel

Definitely. As they aggregate the compute and run standard Xeon for many of these, you obviously are going to hit latency barriers, right? But many of these cell towers are now connected with fiber, so that latency is becoming less of an issue. And once you aggregate, the benefit is twofold. One is that you can run all of the software that used to run at the bottom of the cell tower in one data center. But second, it becomes the foothold for deploying Edge cloud. The same Xeon that's running your radio access network, your base station software, can also be repurposed to run Edge compute workloads. So if AT&T wants to deploy a CDN or a gaming application or a streaming application, it's the same compute platform.

You don't have to build yet another infrastructure, and that's the benefit of a cloud infrastructure, right? You're not building purpose-built infrastructure for a specific workload; you can reuse it across all of these. So that's our big bet. Apart from obviously benefiting from the radio access build-out that AT&T will do, it also makes it easy to deploy other applications there, and that should drive even more growth in the end.

Tom O'Malley
Managing Director and Equity Research Analyst, Barclays

It sounds like scalability is a big factor, but when you look at scaling, you need market trends to generally be in your favor. What you've seen in the 5G market is at least a broad slowing of some of the tower build-out. Can you talk about the health of that market? Do you expect there to be a second wind of 5G deployments? I mean, China was a large deployment very early in this cycle, and you really haven't seen the U.S. or Europe live up to the expectations that we originally had. So where are we in that cycle? Do we need to see more tower upgrades, or more of the micro or small cell build-outs that we originally thought?

Sachin Katti
SVP and Chief Technology and AI Officer, Intel

Yeah, China, of course, went through a build-out cycle earlier than everyone else. India is going through it right now; it has been the fastest-growing market for 5G, and we have seen that in our business this year. North America built out earlier: the U.S. built out in the 2020, 2021 time frame, and I think 2024, 2025 is when you'll begin to see the mid-cycle upgrades happening. The AT&T announcement on Monday, for example, is their mid-cycle upgrade. And 6G will start showing up, let's say, after the 2027, 2028 time frame. Europe, I think, has been more challenging, as Europe itself obviously went through a crisis, and there's fierce competition in every country in Europe, unlike the U.S., where there are basically three operators.

I think every nation in Europe probably has four or five or more operators, right? So it's a tiny market, a very competitive market, and the operators there have been going through a more challenging time. We expect Europe also to go through an upgrade cycle starting next year. Vodafone announced just a few months ago that they will be opening up for an upgrade next year. So I expect the end of 2024 is when you'll begin to see growth coming back into these markets.

Tom O'Malley
Managing Director and Equity Research Analyst, Barclays

So we've talked more on the telco side. Can we just pivot briefly into the enterprise side?

Sachin Katti
SVP and Chief Technology and AI Officer, Intel

Yeah.

Tom O'Malley
Managing Director and Equity Research Analyst, Barclays

Again, another market that's going through a correction here. Can you talk about expectations for when that recovery begins? And then more specifically, what products you're bringing to that market?

Sachin Katti
SVP and Chief Technology and AI Officer, Intel

Yeah. So the Edge markets and the enterprise markets, as I was saying earlier, are going through a correction later than the PC and the server market. We are seeing that in industrial; we are seeing that in healthcare. However, last quarter, we did see sequential growth in the Edge market: quarter-over-quarter, we grew around 6% in those segments. And so we are beginning to see green shoots, like Pat and Bill mentioned in the earnings call. Similarly, the PC market went through a historic inventory correction in the first half of this year, and we began to see improvement in Q3.

We expect both the Edge and PC markets to be in line with the expectations that we have for Q4 as the quarter progresses, and they will both benefit from the proliferation of AI and the launch of Core Ultra next week. So I think the best way to sum it up is: we went through a historic inventory correction earlier, in the first half of this year. The Edge is still going through it, but it's beginning to show quarter-over-quarter growth. We have already seen improvement in Q3 on PC, like Pat talked about in the earnings call, and we expect Q4 to be in line with the expectations we set for this quarter.

Tom O'Malley
Managing Director and Equity Research Analyst, Barclays

We have about a minute left here, so I was curious whether anyone in the audience had a question who didn't get a chance before. All right, I've got one more here. When you look out at next year, clearly, your three verticals are in different phases of recovery: some still turning downwards, others, as you just talked about, inflecting a little bit off the bottom. What one area are you most excited about? You can take it either way: where you see the most growth, or where you think it's most exciting from a technological perspective over the next couple of years.

Sachin Katti
SVP and Chief Technology and AI Officer, Intel

Yeah, I think the Edge AI inflection point is the big one. I missed answering one part of the earlier question, which is: what products? The product that I'm excited about is Core Ultra. It is a PC product, but it is also an Edge product. We take the same processor but modify it for Edge applications. It has an integrated GPU and an integrated NPU, so it really brings AI to the Edge, and to your PC.

What we are seeing is a lot of excitement around the idea that, instead of having to buy and deploy a separate discrete GPU or a very expensive, power-hungry NPU, I can buy a system that has all of the AI capability I need packed in, and deploy many of these models for inference at the Edge. So there's a lot of excitement around the new upgrades that are happening. I get asked the question: How do I see my business getting impacted by Edge AI? I think it's a TAM expansion. Many of these things are either going to augment humans or eventually potentially replace some of the tasks humans do, and that is essentially a TAM expansion for our business at the Edge, right?

So I think that's really where the growth is going to come from. The second one, which will take a bit longer to play out than just next year, is going to be the Ethernet replacement for InfiniBand. That's going to be a big one. Those are decisions, obviously, for big deployments that happen once every two years. So for 2024 and 2025, we do expect that story to start picking up steam, where Ethernet becomes the AI fabric substrate for networking, rather than the proprietary technologies we have today.

Tom O'Malley
Managing Director and Equity Research Analyst, Barclays

Very exciting times. Thank you so much for joining us, Sachin, and have a great rest of the day.

Sachin Katti
SVP and Chief Technology and AI Officer, Intel

Thank you.
