Intel Corporation (INTC)

Bank of America 2023 Global Technology Conference

Jun 7, 2023

Vivek Arya
Managing Director, Bank of America

Good afternoon. Welcome to this afternoon's session. I'm Vivek Arya from the Bank of America Semiconductor Equipment team. Really delighted and honored to have Sandra Rivera, Executive Vice President of Intel's Data Center and Artificial Intelligence team, joining us this afternoon. We will start with an exciting disclosure statement that Intel asked me to read. Then I will just turn it over to Sandra for some opening comments, and then we'll get into a Q&A. Please feel free to raise your hands if you have anything that you would like to bring up. From an Intel disclosure perspective, please note that today's discussion may contain forward-looking statements that are subject to various risks and uncertainties, and may reference non-GAAP financial measures.

Please refer to Intel's most recent earnings release, annual report on Form 10-K, and other filings with the SEC for more information on the risk factors that could cause actual results to differ materially, and for additional information on Intel's non-GAAP financial measures, including reconciliations, where appropriate, to the corresponding GAAP financial measures. After that exciting introduction, Sandra, maybe over to you. Really appreciate you joining us this afternoon.

Sandra Rivera
EVP, Intel

Thank you, Vivek. Thank you for having me here. Let me just start out with some opening remarks, and then we'll get into the Q&A. I just wanted to give a broad backdrop of the market opportunity that we have in front of us. First is that we are operating in a large and growing TAM. The amount of data that continues to be generated in the world, that needs to be processed, moved, stored, and acted upon, just continues to grow. The amount of computing capability that we need to deliver to the world continues to grow. It's wonderful to know that we have a large market opportunity that we are participating in.

Of course, and we'll get to AI. AI, again, is a rising tide that increases the amount of compute required in the world. We are absolutely focused on, frankly, I'll just say, doing fewer things better. When we have looked at our CPU portfolio, our GPU portfolio, and the overall complement of heterogeneous architectures that we have, we have been very focused on execution, execution, and ensuring that we make and meet customer commitments. This is an acknowledgment, clearly, that in recent years we stumbled a bit and had some setbacks. We have recommitted maniacally to ensuring that we make and meet customer commitments.

Our roadmap, certainly on the CPU side, is on track and delivering all key milestones, and we feel really good about all of our leading indicators moving forward. From an overall portfolio perspective, we have within the Data Center and AI Group all of the data center technologies required to address what is a very complex set of workloads. Clearly, not just AI, but networking, storage, and high-performance computing. We have the complement of CPUs and GPUs and AI accelerators, FPGAs, and IPU capability in that portfolio, all brought together with software as that homogenizing layer. We have the hardware and the software required to meet customer demands, and we feel, again, really good about the market opportunity and the expansiveness of the TAM.

From an AI perspective specifically, we unlock value through the software, through the rich stack of software tools and the tool chain, and through the developer enablement that we've committed to for decades, where we've led the market in many ways through technology transitions and leading-edge technology capability. That full complement of capabilities being brought to the AI opportunity is something that, again, we see as market expansive and a tailwind in terms of the market opportunity. Just specifically looking at what I'll call the AI continuum, you know, our focus is bringing AI to the masses. It's making the affordability and the economics work for everybody, and we certainly see a lot of interest in the very largest language models, of course, the GPT-3, GPT-4 types of language models.

Clearly, there is a lot of excitement and very real requirements in the market to be able to address that type of capability. AI is this complex and vast set of workloads, and we do see that you need to have capabilities in the cloud, at the edge, in, of course, the enterprise, and then all the way out to the client devices. The ability for us to continue to integrate AI capabilities in all of our computing platforms is something that we think is highly differentiated and highly valued by customers. When you look at that continuum, cloud, enterprise, edge, to client, we have heterogeneous architectures to address that market opportunity. Starting, of course, with the ubiquity of our CPUs, both in the data center with Xeon, but also out to the client.

In the data center, we're on our fourth generation of integrated AI capability. We started years ago with AVX2, then enhanced that to AVX-512, then moved to VNNI with our third-gen Xeon, and then most recently to the advanced matrix extensions, AMX, the integrated accelerator in 4th Gen Xeon. On the client side, we have the integrated VPU that provides a really market-leading capability, and we believe it will be the most pervasively deployed PC computing device with integrated acceleration once that product ramps later this year. From an overall perspective, after you get past the front end of data management, data processing, and data cleaning, you move on to, of course, the training phase.
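The four waves of integrated acceleration described here correspond to CPU feature flags you can inspect on a running system. A minimal sketch (illustrative, not an Intel tool; flag names follow the Linux `/proc/cpuinfo` convention) mapping a set of flags to the newest tier present:

```python
# Map x86 feature flags to the Xeon AI-acceleration waves described above,
# checked newest-first so the most capable tier wins.
GENERATIONS = [
    ("amx_tile", "AMX (4th Gen Xeon)"),
    ("avx512_vnni", "VNNI (3rd Gen Xeon)"),
    ("avx512f", "AVX-512"),
    ("avx2", "AVX2"),
]

def ai_accel_tier(cpu_flags):
    """Return the newest integrated AI-acceleration tier the flags support."""
    flags = set(cpu_flags)
    for flag, label in GENERATIONS:
        if flag in flags:
            return label
    return "none"

# On Linux, the real flags could be parsed from /proc/cpuinfo; here we use
# a hypothetical Sapphire Rapids-like flag set for illustration.
print(ai_accel_tier({"avx2", "avx512f", "avx512_vnni", "amx_tile"}))
```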

In that training phase, you have small to medium-sized models that the CPU is actually well suited to address. When I say small to medium, I mean 10 billion parameters and less. You have the CPU on the front end doing all of that data prep work, and Xeon really does a great job there. Even the market leader in GPUs has selected, you know, 4th Gen Xeon as their platform for the CPU head node. When you get to the model training, for some of those small to medium-sized models, typically the ones you see in the enterprise, the CPU actually does a very good job there.

For the larger models, you need a much more parallelized architecture, the domain of GPUs and AI accelerators, and this is where we have our own GPUs, GPU Max, as well as Gaudi accelerators, to address that large language model capability for model training as well as inference. That, I would say, is the 100 billion-plus parameter size. At the edge, this is where, you know, again, in the enterprise and in deployment of those models, you do fine-tuning, retraining, and all of that distributed inference. It's a large footprint for us to address with CPUs, but increasingly we see an opportunity there with GPUs. We have GPU Flex there.

That's a smaller-footprint, lower-power edge inference device that does media processing, that does cloud gaming, and other VDI types of workloads on-prem, where our GPU Flex is well suited. Then out to the client devices, of course, we have not only the integrated AI with our CPUs, but also discrete graphics with the Arc brand. A complete portfolio from the cloud to the enterprise edge and out to the client, and all of that brought together by the software. The software really is the unlock of the hardware, and we can get into a lot more discussion in terms of the richness of the software stack and how we unlock value with developers and deliver fast time to productivity through the software stack.
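The sizing rule of thumb in the last few paragraphs, CPUs for roughly 10 billion parameters and under, GPUs and accelerators for the 100 billion-plus models, can be sketched as a simple routing function (the thresholds are the ones quoted above; the function and its labels are illustrative, not an Intel product):

```python
def suggest_hardware(params_billions: float) -> str:
    """Toy routing heuristic: model size in billions of parameters -> compute tier."""
    if params_billions <= 10:
        return "CPU (Xeon)"             # small/medium models, typical in enterprise
    if params_billions < 100:
        return "GPU or AI accelerator"  # mid-size models
    return "GPU Max / Gaudi"            # 100B+ large language models

print(suggest_hardware(7))    # CPU (Xeon)
print(suggest_hardware(176))  # GPU Max / Gaudi
```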

Vivek Arya
Managing Director, Bank of America

Excellent. Thank you, Sandra. Very comprehensive. Just maybe one kind of near-term question, and then we'll go through the industry structure. Near term, Intel, I think, recently said that you expect Q2 to be at the high end of guidance, and I think data center is part of that. Could you give us some more color on what in the data center is doing better than you thought? And just a general state of the union on what you see from a demand environment perspective.

Sandra Rivera
EVP, Intel

The year is shaping up to be pretty much as we expected coming into it. We launched 4th Gen Xeon at the beginning of the year, in January. Actually, our customers helped us launch that platform. Where we see clear leadership vis-a-vis the competition is in those high-growth areas of AI, networking, HPC, security, and storage applications, and there's lots of opportunity there. When Dave talked about the health of the business, it was both on the server data center side as well as the client side, and about just being more optimistic that we could, you know, be in that top half of the range that he gave. The linearity of the business is also healthy from a cash flow perspective and in what we're seeing in terms of customer demand.

We feel good about the way that the first half of the year has panned out, and have been cautiously optimistic about the second half, because we clearly over-index on enterprise workloads, where we have a very strong market segment share position, and on China, where, again, we have a strong brand and a strong market segment share position. The customers are perhaps looking at their second half and balancing what they feel comfortable committing to in terms of on-prem deployments, and on-prem could be cloud infrastructure as well. I mean, typically you have clouds on-prem as well as in public clouds, and they're weighing what they're moving to the public cloud. You know, we have reasons to be still cautiously optimistic about that second half.

Right now, we've been working through a lot of the inventory burn issues, particularly on the enterprise side. You know, we think that we start to see a little bit more movement in the second half, probably more in Q4. The first half has played out the way we expected, and, you know, we feel good about our position there, again, in server and enterprise and client. In the second half, we're seeing some reasons to think that things might come in a bit healthier, but we're still being cautious just because we've had some inventory burn issues to get through.

You know, I think that some customers are still being a little bit tentative in trying to decide where they make their big CapEx investments.

Vivek Arya
Managing Director, Bank of America

Got it. The near-term excitement, is it kind of cloud? Is it enterprise? Is it China? Like, was there one factor that stood out to give you a little more optimism about Q2 data center?

Sandra Rivera
EVP, Intel

Well, I'm not saying anything more than what Dave said. The performance of the business in the first half was as we expected. We have a very strong position in enterprise, but they were burning through more inventory, so that was a bit depressed in the first half. Our position in the hyperscalers is actually quite strong, as we had planned. We continue to see our CSP customers deploying with 4th Gen Xeon. In fact, just last week or the week before, Google launched their C3 Sapphire Rapids 4th Gen Xeon instance with our IPU. That was a product that we co-developed, co-designed with them. We will continue to see the hyperscalers roll out 4th Gen Xeon-based instances throughout the year.

Actually, the pipeline for 4th Gen Xeon is quite healthy. We have over 600 designs, and we have 400 that are already shipping. Every single large cloud service provider in the world is going to be deploying on 4th Gen Xeon. We'll see that continue to play out throughout the year.

Vivek Arya
Managing Director, Bank of America

Got it. Now, kind of the big-picture question, Sandra, is that, you know, there seems to be this kind of zero-sum game between the CPU and, you know, pick your choice of accelerator. Of course, Intel has many accelerator options also. Is it as black and white as that? Like, sorry, does the CPU just have to, you know, lose and disappear and, you know, go away?

Sandra Rivera
EVP, Intel

Yeah, it's a great question. We don't actually see it as an either/or. We see AI as more of a rising tide than a balloon squeeze. I think in the near term, certainly, the growth rate for GPUs is outpacing the growth rate for CPUs, and we expect to see that throughout 2023. As I was describing earlier, you know, when you look at who the purveyors of the very largest language models are, and who can afford $10 million to, you know, over $100 million to train a unique large language model, there are not that many, you know, companies in the world that can actually afford to do that or want to do that. We see so much of the growth opportunity happening when you actually get to deployments.

Typically, you know, enterprises want to train on their own data. They want to do that in their own secure data perimeter. They want to contextualize the queries around, again, their datasets, their acronyms, their unique, you know, domain-specific types of capabilities. A recent example is Boston Consulting Group: we were able to work with them to train on, you know, certainly large language models, open-source models, the BLOOMZ 176 billion-parameter model, using Gaudi. When we went to deploy on-prem, they had, you know, 50 years' worth of data, and they wanted to do that in their contextualized environment, with their own dataset, in their secure perimeter. We were able to do that in less than 12 weeks.

They just see so much value in that time to productivity, and in the security of, again, having their dataset trained in a way that isn't, you know, putting things up in a public model or in a public forum. I think there are just so many examples like that, Vivek, where the, you know, AI tailwind, I think, really will be market expansive for everybody, and it's a big market. We're in the early innings, right? There is so much opportunity out there, and we want to be the company that customers trust for their broad-scale deployments, particularly as we move into that inference and fine-tuning and retraining stage of where we are on that continuum.

Vivek Arya
Managing Director, Bank of America

Got it. How's the outlook on Sapphire Rapids? I think you mentioned that it's being really targeted at the fastest-growth workloads, right? Obviously, AI is one of them. If we kind of fast-forward and, you know, we are having this fireside a year from now, how do you think Intel will have done on the AI CPU side versus your competition with Sapphire Rapids?

Sandra Rivera
EVP, Intel

We are holding our own. We feel really good about where we're performing with 4th Gen Xeon. We had projected that we would be shipping about that million-unit mark by the middle of this year. We're still on track for that. While we over-index on those high-growth workloads in terms of performance leadership, power efficiency, and performance per TCO vis-a-vis the competition, we still address a broad range of workloads beyond just those highest-growth ones with a highly performing, highly versatile CPU platform. A lot of that capability really comes from the software optimizations.

There's so much that we do in investing in our software resources, engaging directly with customers, and doing that optimization work that gets you significant improvement, not just from an overall performance perspective, but from a performance-per-TCO perspective. We had one example recently where a customer is doing database compression, Microsoft SQL Server 2022 database compression, with integrated QuickAssist Technology. We were able to demonstrate that you can go from having 50 servers running that workload to 29 servers, because it's just so much more efficient. That was a direct comparison of one of our 32-core 4th Gen Xeons to the competition's latest 4th Gen, you know, 32-core system.
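As a quick sanity check on the consolidation figure above (50 servers down to 29 on the same workload), the arithmetic works out to roughly a 42% reduction:

```python
# Server consolidation from the SQL Server 2022 compression example above.
before, after = 50, 29
saved = before - after
reduction = saved / before
print(f"servers saved: {saved} ({reduction:.0%} fewer)")  # servers saved: 21 (42% fewer)
```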

We do see that the approach we've taken with Sapphire Rapids, with 4th Gen Xeon, integrating those accelerators, not just for AI but for networking capabilities, is bringing real value to customers. And a year from now, I think we'll have demonstrated that, you know, 4th Gen Xeon is a very competitive product, and that the platform differentiation we've had, the health and the quality of being able to drive those memory transitions, DDR4 to DDR5, the interconnect, PCIe Gen 4 to PCIe Gen 5, the CXL, you know, capabilities, and the quality of the platform and the deployability from day one when we launched 4th Gen Xeon, really will have proven to be a big differentiator.

In terms of where I'll be sitting a year from now, I will have delivered 5th Gen Xeon at the end of this year, on time, on spec, and customers are pretty excited about the drop-in performance boost they get on the existing platform, in addition to, or expanding from, 4th Gen Xeon. I will also have delivered Sierra Forest, our efficiency-core product, and 6th Gen Xeon, Granite Rapids, will be shortly after. I will be in a much, much better position in terms of real, you know, strong leadership across the breadth of the portfolio, and customers are leaning into that. Today, they have samples.

They're doing the volume validation with us on not just 5th Gen Xeon, but, you know, Sierra Forest and Granite Rapids for next year. The health of the product is great, and so all the leading indicators are really, really strong on the back end. I'm looking forward to a year from now, because I know I'll be even stronger a year from now than I am today.

Vivek Arya
Managing Director, Bank of America

Same. Sandra, what do you think is the piece where Intel is putting the most focus? Maybe the answer is all of the above. Like, you know, is it process? Is it architecture? Is it features? Is it just that, you know, once you're knocked off as the incumbent, it just takes some time to get back? Which of those things do you think Intel is working on the hardest, and what does it need to do so that a year from now, or two years from now, you will be in a place where we are not seeing those kinds of market share changes against your CPU competitor?

Sandra Rivera
EVP, Intel

Well, process technology, for sure, is a huge focus, and if we look at what constitutes product leadership, it is a combination of process technology as well as architecture and engineering. On the process technology side, we are absolutely executing to Pat's vision of 5 nodes in 4 years. If you look at, you know, Intel 7, Intel 4, Intel 3, Intel 20A, and Intel 18A, those are the 5 nodes. Intel 7 is done, right? That's what was delivered with 4th Gen Xeon. Intel 4 is being delivered now with Meteor Lake, the high-volume client product. The sister node to Intel 4 is Intel 3, which we are delivering next year with both Sierra Forest and Granite Rapids, and so the health of 4 and 3 is really, really good.

Intel 3 is really just a more optimized, more dense library. It's higher performing for data center and server implementations, but it's very similar to Intel 4, which means that the process is healthy, and we feel really good about, again, all the power-ons happening now and all the volume validation going on with customers. By, you know, next year, three of those nodes: check, check, check. Then, really, next year, in 2024, we'll have Intel 20A with a client product, again being the pipe cleaner for that process.

Intel 18A is the sister node, and that's where we've landed Clearwater Forest, which is the follow-on to our Sierra Forest E-core product that we're delivering in the first half of next year. With Intel 20A, we're gonna get RibbonFET technology, gate-all-around on the transistor. With Intel 18A, we get the backside power delivery to the transistor. Both of those innovations coming together in Intel 18A is really exciting for us. Process is hugely important. Both Intel 3 and Intel 18A are the foundry nodes.

Vivek Arya
Managing Director, Bank of America

Mm-hmm.

Sandra Rivera
EVP, Intel

We, of course, are gonna drive a lot of volume on both Intel 3 and Intel 18A with our own products, but the Intel Foundry goal is to ensure that, you know, we have a volume customer on Intel 3. We are working hard to close a volume customer on Intel 18A, and so that's crucial. The second, you know, big area of innovation and product leadership comes from architecture and engineering. For us, I think that, you know, we have to own the fact that we lost a bit of our engineering discipline over, you know, recent years.

You know, in the last certainly two years since I've been leading this organization, our focus has been execution, execution, and rationalizing the roadmap, which involved painful decisions and trade-offs, but we wanted to go to our customers and say, "When we make a commitment, we're gonna meet a commitment." Again, I'm happy to say that we are so much healthier today. All the leading indicators look great. That focus on our priorities, doing fewer things better, executing for our customers, and coming to market with a predictable cadence of high-quality products is what our customers had counted on us for, for decades, and they can count on us again in terms of product leadership, process leadership, and being on time when we say we're going to be.

Vivek Arya
Managing Director, Bank of America

Got it. You know, one new and interesting development is, you know, kind of this emergence of these combination CPU-GPU platforms, right? Whether it's Grace Hopper from NVIDIA or MI300 from AMD. How do you look at that? Like, is that a big deal? Is it a small deal? Do you think it's gonna cannibalize the current market structure, which is kind of discrete CPUs and discrete accelerators? Or do you think it's kind of, you know, a niche thing? It handles certain workloads, but it's not really going to be a big deal over time.

Sandra Rivera
EVP, Intel

It's a bit of an unknown right now. I mean, clearly, everyone's delivering to customers products that, from a platform perspective, deliver both CPU and GPU, integrated capability, again, at the platform level. What that gives customers, which they like, is flexibility in the system architecture and in addressing the workload requirements. That model works very well. Typically, especially the hyperscalers, they don't deploy, you know, a node or a rack. They deploy very large-scale clusters, and they have very sophisticated software that lands the workload on the optimum, you know, hardware architecture underneath. That model is really the way that the market consumes compute today, particularly for AI.

How the co-packaged, you know, approach will play out, honestly, I don't think anybody knows yet. It does predefine a certain ratio, so you have to know, or think you can project, where those workloads are going to land. AI is moving so quickly, and I'm not sure that anybody can truly say, you know, what it's gonna look like six months from now. But it's something that clearly we're keeping an eye on. We have our own plans as well in terms of some of the future GPU innovations that we're driving forward. Again, looking at, you know, what does it mean to share memory, to share power? You know, are you really optimizing or sub-optimizing any one of those components?

In the near term, Vivek, I mean, the market is big and wide and growing for discrete CPUs and GPUs, and at the platform level, bringing those together.

Vivek Arya
Managing Director, Bank of America

Got it. Now, in late June, I believe, Intel has announced a date when you will describe how you will have separate reporting for the design and manufacturing sides. How does it really change your business on a day-to-day basis? How does it change what you share with customers? Like, how does it change what you do on the data center side?

Sandra Rivera
EVP, Intel

That entire IDM model, or IDM 2.0 model, is actually quite helpful in the decisions that we're making in our own product execution. Some examples of that: you know, we do use a lot of hot lots. Typically, as a GM, I don't always think as hard about that as I probably should in terms of the cost of hot lots. Not just the cost, but the disruption to the factory in terms of its utilization and efficiency. We also have a lot of test time that we drive in our products where, again, am I overshooting a bit in terms of the complexity and the content in those test scripts?

You know, clearly another area is just the decision to step a particular piece of silicon, one of our products. We are getting much more transparency on the real cost to do that, not just the cost in terms of, you know, how expensive that is for the organization, but the opportunity cost in underutilization or inefficiencies in the fab. I think it certainly is very helpful to me as the GM of the business, and to the other GMs at Intel, but also to our process and manufacturing partners, where they need to charge us more for maybe being less predictable and more demanding customers sometimes. We need to probably think through where the optimization points are in terms of our internal costing.

For us, it's actually a very welcome change in how we look at the business. I feel I have way more data to make more informed decisions and better decisions that will play through in the P&L. Similarly, for the manufacturing and fabrication side of the business, they need to ensure that they have an efficient and compelling value proposition as they are attracting customers to Foundry, which means the health of the PDKs, costing that's competitive, defect densities and yields, and all of those factors that really become their set of issues to work through. You know, I just buy a wafer at a predetermined cost, and then I know what my costs are if I want to expedite some of that capability.

Vivek Arya
Managing Director, Bank of America

Got it. More transparency. Just the last thing: what is the trade-off between having a lot of accelerator options, you know, that you can customize to many kinds of customers and workloads, versus having the focus on one? You know, Intel has, right, the programmable systems business, you have the Gaudi accelerator, your CPUs do acceleration, and you have the GPU Max, you mentioned. How do you make sure that you have the right resource allocation and, you know, don't have too much fragmentation of where these resources are being allocated?

Sandra Rivera
EVP, Intel

Yeah, well, I think that it's pretty clear that it is not one-size-fits-all, right? It's not just AI; workloads are so diverse and so expansive that we do need different architectures. You know, scalar architectures and vector architectures and matrix architectures and spatial architectures, and so that full complement of CPU, GPU, AI accelerators, FPGAs, and the IPU is, you know, another set of scale-out tools that we have in the tool chest. All of these are required to meet the diversity of our customers' workloads. The key for us is having a consistent software stack, and I think this is the thing that we clearly see: that developer productivity and time to an outcome is the biggest measure of value for customers.

Particularly when you get into large scale-out clusters, it really isn't just the device. You have to think about the networking, the fabric, the system architecture, the platform capability, the cooling technology, and in some cases, you know, memory pooling, and how you're addressing those capabilities. It is not like a one-size-fits-all approach. We have to invest in innovations across that portfolio. Our focus is really on addressing customer workloads, and that comes in through the software, and a lot of that optimization work, frankly, does happen in software. Process technology always gets you a performance boost, you know, anywhere from 15%, 20%, 25%, to 30%. Architecture and design gets you another performance boost.

Another, you know, 20%, 25%, 30%. Software is the multiplier.

Vivek Arya
Managing Director, Bank of America

Right.

Sandra Rivera
EVP, Intel

You can get, you know, a 5x, 10x, 20x performance boost through software. We really do believe you need that rich set of underlying heterogeneous architectures, but it's the software that is the most critical, and that's where the biggest area of investment is gonna be for us going forward.
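A back-of-the-envelope illustration of why software is called the multiplier here: compounding the midpoints of the quoted hardware gains against a 10x software gain (the specific values are assumptions picked from the ranges quoted above, not Intel data):

```python
process = 1.25        # ~25% gain from a process node transition (quoted: 15%-30%)
architecture = 1.25   # ~25% gain from architecture and design (quoted: 20%-30%)
software = 10.0       # ~10x gain from software optimization (quoted: 5x-20x)

hw_only = process * architecture
combined = hw_only * software
print(f"hardware-only speedup: {hw_only:.2f}x")  # 1.56x
print(f"with software:        {combined:.1f}x")  # 15.6x
```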

Vivek Arya
Managing Director, Bank of America

Terrific. Great. Thank you, Sandra.

Sandra Rivera
EVP, Intel

Thank you, Vivek.

Vivek Arya
Managing Director, Bank of America

Really appreciate your time.

Sandra Rivera
EVP, Intel

Yep, good to see you.

Vivek Arya
Managing Director, Bank of America

Appreciate you. Thanks. Thank you, all.
