Before we begin, please note that today's discussion contains forward-looking statements on the environment as we currently see it. It does involve risks and uncertainties. Our filings with the SEC, including our most recent earnings press release and 10-K, provide more information on the specific risk factors that could cause actual results to differ materially. Good morning. Thank you for joining us today for our second in a series of investor webinars. The purpose of our webinars is to give our outside owners the opportunity to engage more closely with our leadership team and to focus on specific topics that are critical to better understanding our strategy and ultimately the value we are striving to create for all of our stakeholders. I'm John Pitzer, Corporate Vice President and head of investor relations.
In January, we hosted our first webinar on the PC business, highlighting our view on the overall market and the strategies we're pursuing to capture value. Today, we're going to take a closer look at our Data Center and AI business with the help of three of our key executives joining me on the webinar. Sandra Rivera, Intel Executive Vice President and General Manager of our Data Center and AI Group, Greg Lavender, Senior Vice President, CTO, and General Manager of our Software and Advanced Technology Group, and Lisa Spelman, Corporate Vice President and General Manager of our Xeon products. Today's discussion will focus on three key issues which have been top of mind for many of our owners in recent engagements.
First, we see the data center CPU market, and the x86 market in particular, as a solid growth market as the demand for compute, and especially compute cores, continues to accelerate. Second, we continue to make very strong progress on our process and product roadmap, and we see a clear path to regaining leadership and outgrowing the market. Third, we are well-positioned to capitalize on the accelerating growth in AI across our portfolio of CPUs, accelerators, and software as we aim not only to proliferate AI as a workload, but also to truly democratize it with our open ecosystem strategy. After our prepared comments, Sandra will be available to answer your questions. As is our normal practice, we would ask that you queue for a question, and when it is your turn, to ask one question and a brief follow-up.
We would ask that any questions today be focused on the topics at hand and our DCAI strategy. We will be more than happy to address any questions on near-term outlook and financials on our quarterly conference call when we report Q1. With that, let me turn things over to Sandra.
Thank you, John. Hello, I'm Sandra Rivera. I lead the Data Center and AI Group at Intel. I'm here today to provide an update on our data center business. Over the course of our discussion, I will walk you through Intel's view of the market and how we're positioning ourselves for long-term growth. I will also show you the tangible progress we're making on our data center roadmap. You can see how we're executing to bring our leadership solutions to market. We will talk about the massive AI opportunity in front of us and our strategy to truly democratize and capture a greater share of this rapidly growing market. The long-term demand for compute continues to accelerate. We see a market opportunity of more than $110 billion for Intel's data center and AI logic silicon business by 2027.
High-growth workloads such as AI, networking, security, analytics, and HPC, and our customers' demands for both mainstream processors and discrete accelerators to run these workloads continue to expand the market. Within this large and growing TAM, we believe our hardware and software solutions are well-positioned to grow and win in the market. Whether that's through our Xeon processors, which today are the foundation for mainstream compute, or through executing on our portfolio of heterogeneous architectures that allow us to compete and win in today's high-growth workloads. While the last few years have been challenging, we continue to hold a strong position in the market because of the approach we take with our data center solutions. Combined with the strength of our roadmap, we intend to compete and win in data center applications. When we talk about compute demand, we often look at the TAM through the lens of CPU units.
However, counting sockets does not fully reflect how silicon innovations deliver value to the market. Today, innovations are delivered in several ways, including increased CPU core density, the use of accelerators built into the silicon, and the use of discrete accelerators. As new use cases continue to emerge and the AI market grows, so does the demand for mainstream compute. When looking at compute demand through the lens of the number of CPU cores delivered to the market, we have seen a mid-20s CAGR over the last five years and expect that pace of growth to continue. We expect x86 CPU revenue to follow core trends more closely than sockets in the coming years, and we are increasing the number of cores in our Xeon roadmap at a faster rate than in the past.
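As a rough sketch of what the core-growth claim above implies (the 25% rate below is an assumption standing in for "mid-20s CAGR"), compounding that growth over five years roughly triples the number of CPU cores delivered to the market:

```python
# Sketch: what a mid-20s CAGR in delivered CPU cores implies over five years.
# The exact 25% rate is an assumption; the transcript says only "mid-20s".
cagr = 0.25
years = 5
multiplier = (1 + cagr) ** years
print(f"Cores delivered grow ~{multiplier:.2f}x over {years} years")  # ~3.05x
```

This is why revenue is expected to track cores rather than sockets: the same socket count can carry several times the core count over a product cycle.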
We expect to deliver the increased performance and processor density the market demands while monetizing that incremental customer value through higher ASPs. As the mainstream compute market continues to grow, we are well-positioned to capture a larger share of this market with our Xeon roadmap. Today, I will show how our P-core products will address traditional x86 demand and how our new E-core product line will compete directly with competitive architectures focused on high-density throughput and performance per watt. Complementing our silicon, we serve our customers' diverse needs through a broad range of software and fleet-level services that are built to scale customer deployments from the cloud through the network and out to the intelligent edge. Our combination of silicon, software, and fleet-level solutions is highly competitive in delivering total cost of ownership advantages in both mainstream applications and high-growth workloads.
With our latest 4th Gen Xeon processor, customers are seeing gen-on-gen TCO improvements ranging from 52% to 66%. Today, Intel is the volume leader in the IPU market, with our FPGAs, Xeon D, and Ethernet components deployed in six of the top eight hyperscalers. These customers are using our solutions to manage their infrastructure more efficiently. We also make it easier for customers to deploy our solutions through our extensive ecosystem of partners, simplified software tools, and optimized compilers. The investments we've made in higher levels of the software stack make it easier for developers to use the AI frameworks they're familiar with, such as PyTorch and TensorFlow. With oneAPI, we're giving developers openness and choice in the hardware architectures they use by delivering upstreamed, optimized libraries that are easy to program and that scale on our hardware, using one code base across multiple architectures.
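One hedged way to see how a gen-on-gen TCO range like 52% to 66% can arise: if a new generation does some multiple more work per server, fewer servers are needed for the same total work. The sketch below assumes costs scale linearly with server count and ignores fixed costs; the specific speedup values are illustrative, not Intel's methodology.

```python
# Hedged sketch: if a new generation delivers `speedup`x more work per server,
# and infrastructure cost scales with server count (fixed costs ignored),
# the fraction of cost saved for the same total work is 1 - 1/speedup.
def tco_improvement(speedup: float) -> float:
    return 1.0 - 1.0 / speedup

# Illustrative only: speedups of ~2.1x and ~2.9x happen to bracket 52%-66%.
print(f"{tco_improvement(2.1):.0%}")  # ~52%
print(f"{tco_improvement(2.9):.0%}")  # ~66%
```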
Greg Lavender will share more details on the work we're doing to make it easier for developers to program on our heterogeneous portfolio. While we broadly deliver heterogeneous silicon solutions, the workhorse for mainstream compute within data centers continues to be the CPU. Most general purpose workloads today use processors that are compute optimized and that deliver the highest performance per core. We are highly competitive in this segment of the market with our Xeon Scalable processors. As the market for mainstream compute grows, the requirements for data center infrastructure are expanding. Today's born-in-the-cloud businesses and the growth of microservices require a different category of processors that deliver higher core density at lower power. To better serve today's evolving compute requirements, we expanded our Xeon roadmap to address customers' broad application needs.
We announced this expanded roadmap last year, and we're making excellent progress to bring these solutions to market on time and at the highest quality. Our Xeon roadmap now has two swim lanes that are optimized to cover the broadest range of mainstream compute. Our P-core Xeon processors are optimized to reduce TCO in both compute optimized and general purpose compute workloads. P-core Xeon processors are represented by our Rapids swim lane. Our E-core Xeon processors target today's growing market in ultra-high-density compute, primarily occurring within hyperscalers, many of which are using or developing Arm-based homegrown solutions. Our new class of Xeon is purpose-built to deliver best-in-class performance per watt with all the ecosystem advantages of the x86 instruction set. This two-core strategy will allow us to meet our customers' needs across the broadest range of data center workloads, market segments, and deployment models.
It also allows us to reduce development costs, reduce risk, and shorten development time by reusing many architectural features, including memory controllers and I/O chiplets and our rich software tool chain. Our 4th-gen Xeon Scalable processor, formerly code-named Sapphire Rapids, launched earlier this year and is our latest P-core Xeon based on Intel 7 process technology. 4th-gen Xeon is the highest quality data center processor we've launched in many generations, and we're aggressively ramping to customers. We remain on track to ship 1 million units by mid-year. We're seeing broad adoption across the enterprise, cloud service providers, comms service providers, OEMs, and ODMs. We have over 200 designs shipping currently from all major OEMs and ODMs. In addition, the top 10 global cloud service providers are deploying services now and throughout 2023.
The strong demand for 4th-gen Xeon is due to the health of the platform as well as the processor's built-in accelerators, which deliver a gen-on-gen average performance per watt improvement of up to 2.9x in a broad range of workloads. One of the most significant features in our latest processors is the new AMX AI accelerator engine, which delivers up to a 10 times increase in AI inference and training performance versus the previous generation. This accelerator broadens the range of AI workloads that can run on Xeon without requiring additional discrete accelerators. To demonstrate the real-world value that customers are seeing with our latest Xeon processors, let's go to our lab to hear from Lisa Spelman, General Manager of our Xeon products. Hi, Lisa. What do you have for us?
Hi, Sandra. It's great to be able to join you today. Customers are looking for real-world application performance for their most demanding workloads. We collaborate closely with our customers and partners to understand their challenges and the shifts in their business. We know AI is pervasive and growing at an incredible rate, embedded in the vast majority of use cases and workloads. For several generations, Intel has invested in accelerating AI on Xeon, and this acceleration continues to play a central role in addressing that demand. Sandra just mentioned that 4th gen Xeon with built-in acceleration of Intel AMX delivers a 10x increase in AI performance versus our previous generation. Many of you are probably wondering, though, how does that compare to competition? Well, I have a 48-core 4th gen Intel Xeon and a 48-core 4th gen AMD EPYC.
I'm going to launch several AI imaging and language workloads on both systems. You can see that our 4th Gen Xeon with the built-in acceleration of Intel AMX is delivering an average performance gain of 4x over our competition's latest on this broad set of deep learning workloads. We estimate that over 80% of server processors sold today are 48 cores or less, and these results demonstrate how the vast majority of our deployments will perform. Our competition would need well more than 2x the cores to match this performance. It really pays off to deliver performance with efficiency. Back to you, Sandra.
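The core-count claim in the demo above can be sketched with simple arithmetic, under the (optimistic) assumption that a competitor's throughput scales linearly with core count:

```python
# Sketch: if 48 Xeon cores deliver 4x the throughput of 48 competing cores,
# and we optimistically assume competing throughput scales linearly with
# core count, matching performance would take roughly 4x the cores.
xeon_cores = 48
avg_gain = 4.0  # average gain measured in the demo above
cores_to_match = xeon_cores * avg_gain

assert cores_to_match > 2 * xeon_cores  # "well more than 2x the cores"
print(int(cores_to_match))  # 192
```

In practice scaling is sub-linear, which only strengthens the "well more than 2x" claim.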
Thanks, Lisa. That's a great demonstration of the built-in AI performance we're able to deliver in our latest Xeon. Looking toward the future, we continue to make great progress as we execute on our Xeon roadmap. Later this year, we expect to deliver our 5th Gen Xeon Scalable processor, Emerald Rapids. Silicon is coming out of our factories at very high quality. Volume validation is well underway, and we're sampling the products to customers today. Our 5th Gen Xeon features an increase in processor cores and is pin compatible with our 4th Gen Xeon, providing customers an easy migration path to take advantage of the processor's built-in workload accelerators, enhanced security features, and increased performance within the same power envelope. Customers who upgrade to 5th Gen Xeon require minimal validation, speeding up their time to deployment. I look forward to sharing more about this product throughout the year.
Following 5th Gen Xeon will be Granite Rapids and Sierra Forest. These two processors will be delivered on our next generation high-performing platform, which shares the same base architecture and gives customers portability between the two products. The health of these two programs is excellent, with the power on process exceeding our expectations. I'm pleased to tell you today that we have narrowed our delivery window, and we'll ship Sierra Forest to customers in the first half of 2024, with Granite Rapids following shortly after. Granite Rapids delivers several improvements compared to the previous generation, including increased core counts, improved performance per watt, and faster memory and IO innovations. The first Granite Rapids silicon coming out of our factories is healthy, and the overall program is in excellent shape as we continue to hit all major engineering milestones.
Lisa, would you like to share some of the leadership innovations we're delivering with Granite Rapids?
Absolutely, Sandra. We have exciting news on our next gen platform memory subsystem. We are building the fastest memory interface in the world for Granite Rapids. Intel invented and led the ecosystem in developing a new type of DIMM called Multiplexer Combined Rank (MCR) that lets us achieve speeds of 8,000 mega transfers per second based on DDR5. Let me show you. You can see that we have an MCR DIMM at 8,000 in our Granite Rapids platform. In order to demonstrate the health of the platform and our memory subsystem, I'm going to saturate the platform with memory reads and writes. You can see that Granite Rapids is stable and saturating a healthy memory subsystem. MCR at 8,000 is well ahead of the rest of the industry.
This boost in bandwidth is critical for feeding the fast-growing core counts of modern CPUs and ensuring that your cores can be efficiently utilized. In fact, memory bandwidth is a first order performance limiter on many workloads, including those in the AI and HPC space. The MCR DIMM innovation achieves an incredible 83% peak bandwidth increase over current gen server memory technology and greater than 1.5 terabytes per second of memory bandwidth capability in a 2-socket system. Back to you, Sandra.
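The 2-socket figure above is consistent with a back-of-envelope peak-bandwidth calculation. Note the per-socket channel count below is an assumption for illustration (the transcript does not state it); DDR5's 8-byte data path per channel is standard.

```python
# Back-of-envelope peak memory bandwidth for MCR DIMMs at 8,000 MT/s.
# Assumptions (not stated in the discussion): an 8-byte data path per DDR5
# channel, and 12 memory channels per socket.
mt_per_s = 8000e6          # 8,000 mega transfers per second
bytes_per_transfer = 8     # 64-bit DDR5 data path
channels_per_socket = 12   # assumed channel count
sockets = 2

per_channel = mt_per_s * bytes_per_transfer           # 64 GB/s per channel
total = per_channel * channels_per_socket * sockets   # bytes per second
print(f"{total / 1e12:.3f} TB/s")  # ~1.536 TB/s, consistent with ">1.5 TB/s"
```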
Thank you, Lisa. That's great progress by the team. We're sampling Granite Rapids to customers today, and the feedback we've been getting is very encouraging. Many of our leading customers have been highly impressed with the health of the silicon and the early results they're seeing. We remain on track to launch Granite Rapids in 2024, shortly after our E-core based Sierra Forest processors come to market. The progress we're making on our Sierra Forest program is also exceeding our expectations. Sierra Forest will be our first Xeon processor that leverages our efficient cores and will be our lead vehicle for the Intel 3 process. These processors will feature 144 E-cores per socket and will be highly competitive with high core count data center processors and Arm-based in-house developed solutions.
Our E-core Xeons will deliver best in class performance per watt with all the ecosystem advantages of the x86 instruction set. The silicon health of Sierra Forest is excellent. We started the power on process earlier this quarter, and we were able to boot multiple operating systems on the silicon in under a day. I'll invite Lisa back now to show us all firsthand how healthy our first E-core Xeon is. Lisa, can you show us a sneak peek into the power on process?
Absolutely. Let's do it. Sierra Forest is the highest core count product we've ever powered on. Let me show you Sierra Forest on one of the multiple OSes that we have booted. You can see Sierra Forest here booted on Linux. I have just launched an application to stress the cores. You mentioned 144 cores. Here you can see them all running a workload and healthy. Sierra Forest has gone from the fab to powered on and booted to full operating system level in less than 18 hours, demonstrating the high quality of our execution and giving us greater confidence in our progress on this product.
The customer response to Sierra Forest has been enthusiastic, including from our top cloud service providers. Our customers are eager to see the capabilities this high-density, performance-per-watt-optimized Xeon will provide, in addition to being able to leverage the well-established x86 ecosystem. It was great to be here today. Thank you.
Thank you, Lisa. The progress our engineering teams are making with our first E-core Xeon program is outstanding. Our first customer for Sierra Forest has already received silicon, and we expect to be sampling to additional customers in the coming months. Announcing here for the first time, we will continue to execute on our E-core roadmap swim lane with the follow-on to Sierra Forest, a product we've codenamed Clearwater Forest, which will come to market in 2025. Clearwater Forest will be manufactured on Intel 18A, the node where we plan to achieve process leadership. It's the culmination of our 5 nodes in 4 years strategy. With our Xeon roadmap, we are once again demonstrating execution excellence. Our 4th Gen Xeon is high quality and ramping strong, and we're getting back to schedule predictability with 5th Gen Xeon, which will ship in Q4 of this year.
Sierra Forest will be delivered in the first half of 2024. Granite Rapids will follow shortly after. These two programs will deliver a significant improvement in performance per CPU and performance per watt. We're firmly on pace to deliver product leadership with Clearwater Forest in 2025. As the strength of our roadmap increases, we feel good about our ability to regain market share. In addition to Xeon, we continue to make progress executing on the rest of our silicon roadmap. That includes GPUs, discrete AI accelerators, IPUs, and FPGAs. Today, our Data Center Flex GPUs deliver approximately 30% better performance in media and AI inference compared with competition. Our Max series GPUs are delivering up to 50% better performance for physics applications versus competitive products. We achieve this performance using oneAPI, which provides code portability across different hardware architectures.
We recently streamlined our GPU roadmap. Today, our GPU teams are operating with an extreme focus on quality, engineering discipline, and predictable execution. Essentially, we're using the same playbook we ran for our Xeon roadmap: listen to customer feedback; focus, prioritize, and execute. We'll continue to update you on our GPU offerings, which are critical to addressing the accelerated computing market, particularly with the inflection point of large language models that we expect to drive continued demand for accelerated compute. For today's large language models, our Gaudi accelerators expand our AI offerings and are delivering tremendous performance gains for deep learning training use cases. We're in the market today with both Gaudi 1 and Gaudi 2. Gaudi 2 is demonstrating roughly 2 times higher deep learning inference and training performance compared to the most popular GPU.
We recently taped in our Gaudi 3 AI accelerator to drive even greater deep learning AI performance over the previous generation. With our FPGA products, we also continue to drive leadership. Following record revenue for our programmable solutions business in 2022 and having won every major IPU socket in the industry, we launched our Agilex F-Tile FPGA earlier this year. These FPGAs are targeted for use in high bandwidth networking, cloud, and embedded applications. Our increased investments in our FPGA portfolio are paying off with more than 15 new products scheduled to PRQ this calendar year, more new product introductions than ever in our FPGA business. Our silicon assets set us up for long-term growth in mainstream compute and give us a strong foundation on which to deliver leadership products for the most strategic and fastest growing segments of the industry, such as artificial intelligence.
Intel is committed to the true democratization of AI from the cloud to the network and out to the edge by enabling broader access to solutions and more cost-effective deployments through an open ecosystem approach. With our portfolio of CPUs, GPUs, deep learning accelerators, FPGAs, and software from compilers to developer kits, Intel is well positioned to compete and capture a significant portion of this fast-growing market. AI comprises a vast and complex set of workloads that include data preparation, data processing, classical machine learning, training, inference, and the management and movement of structured and unstructured data. Most AI workloads, such as data processing and analysis, are general purpose workloads that run best on CPUs for several technical and economic reasons that include the ubiquity of the x86 architecture and the high performance per TCO value of Xeon processors.
Large portions of AI require the efficiency and low latency performance that discrete accelerators provide, particularly in large model training and inference. We see the AI logic silicon market for both general purpose compute and accelerated compute combining for a silicon TAM of more than $40 billion. Our Xeon processors lead in the general purpose compute segment of AI and will continue to do so with our investments and innovations over multiple generations. As an example of our leadership, consider the most popular content that drives the most traffic on the Internet, video streaming. Intel works directly with leading content providers to use machine learning and statistical analysis to intelligently accelerate their video processing pipelines using features built into Xeon like Deep Learning Boost and AVX-512 instructions.
When content needs to be distributed to end customers, Intel works with leading communications service providers to use AI-based compute to accelerate, compress, and encrypt data moving through the network with Xeon's built-in data streaming accelerator and QuickAssist Technology. For many customers, the best solution is a flexible general purpose processor that allows them to run their AI workloads directly within the CPU with integrated acceleration that delivers cost-effective and high performance per TCO compute. This is why NVIDIA chose 4th Gen Xeon as the head node to run alongside its H100 GPUs to power VMs that accelerate generative AI models in Microsoft Azure, including ChatGPT. Xeon also delivers exceptional compute for small to medium-sized AI models that are under 10 billion parameters. Models of this size represent the bulk of AI inference today.
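A hedged sketch of why sub-10-billion-parameter models are a natural fit for CPU inference: at 16-bit precision their weights fit comfortably in ordinary server DRAM. The 2-bytes-per-parameter figure assumes bf16/fp16 weights; activations and KV caches add overhead on top.

```python
# Sketch: approximate weight memory for a 10-billion-parameter model.
# Assumes 2 bytes per parameter (bf16/fp16); activation memory not included.
params = 10e9
bytes_per_param = 2  # bf16
weight_gb = params * bytes_per_param / 1e9
print(f"~{weight_gb:.0f} GB of weights")  # ~20 GB, well within server DRAM
```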
As the complexity and size of AI models grow, such as in large language models, demand for discrete accelerators also grows, especially for inference. For this segment, we have Gaudi deep learning accelerators and Max-series GPUs. These purpose-built high precision compute solutions give Intel a foundation on which to deliver for our customers and partners and to compete in the discrete accelerator segment of the AI market. Together with Hugging Face, we recently enabled the 176 billion parameter BLOOM model with Gaudi 2 deep learning accelerators. BLOOM is an open source transformer-based multilingual large language model. Hugging Face recently published a blog detailing how we were able to deliver 3 times faster inference performance compared with competitive GPUs without the need to write complicated scripts.
We also worked with Hugging Face to show Stability AI's Stable Diffusion running more than three times faster on fourth gen Xeon. This is a generative AI model for state-of-the-art text to image generation and an open access alternative to the popular DALL-E image generator. In the segment where AI and HPC are converging, our Xeon Max CPUs and Data Center Max GPUs are expected to achieve two exaFLOPS of double precision peak performance in the 10,000+ node Aurora supercomputer cluster at Argonne National Lab. Our work with Hugging Face, Stability AI, and Argonne National Lab demonstrates our ability to run very large models on our heterogeneous computing portfolio. Another aspect of truly democratizing AI requires that we efficiently scale solutions to support billions of parameters. This is a challenge that Intel is addressing head-on with our ecosystem partners and the build-out of our Developer Cloud.
Computing very large models with hundreds of billions of parameters like ChatGPT requires a systems approach where networking, memory bandwidth, memory capacity, platforms, and the ecosystem are working efficiently with high-level software. With a cluster of 256 Xeon processors and 512 Gaudi deep learning accelerators, we recently delivered 1 TB per second of bisection bandwidth and record-setting multimodal models trained to convergence with language and vision. We achieved a scaling efficiency of 97%, which means performance scaled from one node to 512 nodes with almost no loss. This is considerably higher than the industry average for any data center cluster. In addition to performance and scale, customers also want portability in their AI workloads. They want to build once and deploy anywhere.
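For reference, scaling efficiency of the kind quoted above is typically computed as measured speedup divided by ideal (linear) speedup. The throughput numbers in the sketch below are hypothetical, chosen only to illustrate how a 97% figure would fall out:

```python
# Sketch: scaling efficiency = actual throughput / ideal linear throughput.
# The throughput values are illustrative, not Intel's measurements.
def scaling_efficiency(throughput_1: float, throughput_n: float, n: int) -> float:
    return throughput_n / (n * throughput_1)

one_node = 100.0     # samples/sec on 1 node (hypothetical)
n_nodes = 512
measured = 49_664.0  # samples/sec on 512 nodes (hypothetical)
print(f"{scaling_efficiency(one_node, measured, n_nodes):.0%}")  # 97%
```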
Intel OpenVINO tools and libraries enable models that are trained in the cloud to be deployed efficiently at the edge, where power and performance differ significantly. Today, there are millions of downloads of OpenVINO and hundreds of thousands of developers using the tool across a broad range of customers operating in vertical industries like industrial automation, manufacturing, and healthcare. As we continue to deliver heterogeneous architectures for AI workloads, deploying them at scale will require software that makes it easy for developers to program and a vibrant, open, and secure ecosystem to flourish. To talk more about our progress in AI software, I'd like to welcome Greg Lavender, who is joining us from London. Greg is Intel's Chief Technology Officer and General Manager of our Software and Advanced Technology Group.
Thank you, Sandra. It's great to be here with you today. With nearly two years at Intel under my belt, one of my priorities is to drive a holistic, end-to-end, systems-level approach to AI software at Intel. We have the accelerated heterogeneous hardware ready today to meet customer needs. The key to unlocking that value in the hardware is driving scale through software. To achieve the democratization of AI, Intel is committed, first, to fostering an open AI software ecosystem, upstreaming software optimizations into AI/ML frameworks to promote programmability, portability, and ecosystem adoption. Second, providing choice and compatibility across architectures, vendors, and cloud platforms in support of an open accelerated computing ecosystem. Third, delivering trusted platforms and solutions to secure diverse AI workloads in the data center and inference at the edge with Confidential Computing.
Finally, scaling our latest and greatest accelerated hardware and software by offering early access testing and validation in the Intel Developer Cloud. Intel continues to contribute software optimizations upstream to popular open source AI frameworks such as PyTorch and TensorFlow to increase adoption in the AI ecosystem. Intel is one of the top 3 contributors to PyTorch, with a 3x increase in the number of changes to the source code from 2021 to the end of 2022. PyTorch 2.0, which shipped in the middle of March this year, included a significant new compilation feature that we have optimized for CPU performance, yielding great results on many of the latest transformer models. The oneAPI Deep Neural Network Library (oneDNN) was integrated into TensorFlow 2.9, delivering up to a 3x performance improvement to the millions of developers using it.
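As a practical aside, the oneDNN integration mentioned above ships inside stock TensorFlow and is controlled by the documented `TF_ENABLE_ONEDNN_OPTS` environment variable, which must be set before TensorFlow is imported. A minimal sketch (the TensorFlow import is left commented, since this only demonstrates the toggle):

```python
import os

# oneDNN optimizations ship inside stock TensorFlow (on by default on Linux
# x86 from 2.9). The documented toggle is TF_ENABLE_ONEDNN_OPTS, and it must
# be set in the environment before TensorFlow is imported.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"

# import tensorflow as tf  # oneDNN-optimized kernels would now be selected
print(os.environ["TF_ENABLE_ONEDNN_OPTS"])
```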
Our work with Hugging Face combines state-of-the-art hardware and software acceleration to train, fine-tune, and predict with Hugging Face Transformers and the Optimum extensions on Intel Xeon Scalable processors and Habana Gaudi 2 processors. As a result of this upstream work across all industry-standard AI frameworks, there are literally millions of monthly downloads by developers who benefit from Intel optimizations included as a default. We are extending this approach of a common software stack to address the need for large language models on CPUs and GPUs through our work with DeepSpeed, an open source deep learning optimization library from Microsoft. We are enabling scale and speed for deep learning training and inference on the Intel GPU Max Series. Driving software optimizations upstream into AI/ML frameworks fuels an AI software ecosystem that is optimized for customers to take full advantage of performant Intel platforms.
In an open ecosystem, developers want to be able to write once, run anywhere. SYCL, an open and royalty-free C++-based programming model from the Khronos Group, offers programmability and portability across multiple hardware accelerators from multiple vendors. We believe the industry will benefit from an open, standardized programming language that everyone can contribute to and collaborate on, that is not locked into a particular vendor, and that can evolve organically based on its community and public requirements. The desire for an open multi-vendor, multi-architecture alternative to CUDA is not diminishing. Fundamentally, we believe that innovation will flourish the most in an open field rather than in the shadows of a walled garden. oneAPI includes compiler support for the SYCL language and is our alternative to CUDA. It provides choice to developers and maximizes portability across architectures, using a new architecture-agnostic industry standard for open accelerated computing applications.
Momentum is building, with the installed base of oneAPI software increasing by over 85% from 2021 to 2022. We have a portfolio of oneAPI toolkits available to download today, including those used for AI-specific workloads, that help developers build, analyze, and optimize their diverse workloads across multiple architectures. Intel has the largest number of active developers among all silicon providers, with 6.2 million developers out of a global market of 33 million. Among AI/ML users, Intel also has the largest active developer base of any silicon provider, with 64% of AI developers using Intel tools. To accelerate compatibility for heterogeneous architectures, in May last year Intel released an open source toolkit called SYCLomatic to help developers more easily migrate their code from CUDA to SYCL and C++. The results are impressive.
We are seeing that SYCLomatic is typically able to migrate 90% of CUDA source code automatically to SYCL source code, leaving very little for programmers to manually tune. Open choice does not mean that customers have to compromise on trust when running their diverse AI workloads. The field of security in AI is ripe for disruption with the rise of Confidential Computing, which protects sensitive data in use within a hardware-enforced secure enclave in a privacy-preserving manner. This is particularly important in regulated industries such as unclassified defense, financial services, healthcare, and the autonomous vehicle industry. The ability to truly democratize AI will come from the ability to drive scale and lower the barriers to entry for large language models.
That means enabling access to our latest and greatest accelerated hardware and software as early as possible through Intel Developer Cloud, with the beta announced in September of last year. We have the hardware availability, capacity, and capabilities for developers to use today. We offer both bare metal as a service and virtual machines as a service on Intel pre-release and early post-release hardware across a selection of SKUs, including our 4th Gen Xeon Scalable processor, our GPU Flex Series, our GPU Max Series, and the Habana Gaudi 2 accelerator, with Intel FPGAs on the way. Customers can analyze and optimize small to large emerging AI and analytics workloads using their laptops to connect to Intel Developer Cloud, without expensive hardware costs.
Not only does this accelerate their time to market on the latest Intel accelerated hardware technologies, but their workload requirements will help drive incremental demand, pulling Intel platform technology through the cloud service providers. Momentum is building, and we have 4 times the number of public beta customers since the original announcement of Intel Developer Cloud at Intel Innovation in September 2022. We expect to ramp availability over the spring and summer as we make significant additional investments that further extend the support and validation of software development for AI large language models, including ChatGPT and Stable Diffusion, at scale. We plan to showcase early technology, verified performance, and solutions at Intel Innovation in September.
Intel's holistic approach to driving open programmability and portability, enabling choice of architecture with software compatibility while providing trusted platforms for AI inference at the edge and infrastructure at scale, is the key to democratizing AI for everyone. Thank you, and now let me hand it back to Sandra to wrap up.
Thank you, Greg. We look forward to continuing to update all of you on the progress we're making to democratize AI through our hardware, software, and platform technologies. Intel's future in data center and artificial intelligence is bright. As we continue to execute on our roadmap and deliver high-quality leadership solutions on a predictable cadence, we will capture a larger portion of the market. Our 5th Gen Xeon will be delivered this year. Sierra Forest is on track to be delivered in the first half of 2024, and Granite Rapids will follow shortly after. The proven playbook we used with Xeon processors that allowed us to get back to delivering high-quality products at a predictable cadence is being deployed across the rest of our silicon portfolio with our Xeon Max Series and Flex Series GPUs, our Gaudi products, and our Agilex FPGAs.
Running a playbook where all teams are focused on our commitments, prioritizing critical aspects of our programs, and executing at a predictable cadence will enable us to be more competitive in high-growth parts of the market. We are committed to truly democratizing all aspects of AI, from the cloud to the network to the edge, by enabling broader access and more cost-effective deployments through an open ecosystem. All told, our hardware and software strategies are making it easier for application developers and service providers to build solutions with Intel and position us for strong growth with the market and our customers. With a strengthening roadmap and excellent execution, we believe you will see our market share and margins grow as we deliver process and product leadership to the market. I'd like to join John now over at the table to take your questions.
Sandra, thank you for the great update on the DCAI business. We're now gonna transition to the Q&A portion of the webinar. Just as a reminder, we ask each participant to ask a single question and a brief follow-up where applicable, and again, try to keep the question germane to the topics at hand, which is the DCAI long-term strategy. With that, Jonathan, can we have the first question, please?
Certainly. One moment for our first question. Our first question comes from the line of C.J. Muse from Evercore ISI. Your question please.
Good afternoon, good morning. Thank you for hosting today's webinar. I really appreciate it. I guess, Sandra, the first question I would have for you is that you have a lot of irons in the fire for AI. Sitting here today, what do you see as the greatest opportunities for Intel? Should we be thinking about inference on CPUs? Should we be thinking about Gaudi accelerators? How to think about GPU efforts, and what should we be looking at to gauge success over the next 6, 12, 18 months?
Yeah. Thanks for your question, CJ. A couple things. First of all, one of the things that we know is that AI workloads are broad and diverse, and they do require a heterogeneous set of architectures to best address the different applications' requirements. Having a broad portfolio is necessary, and we see that, you know, even with competitors, adding more to their overall portfolio, really following the playbook that we've had for a number of years. If you look out in time in terms of where the fastest growth is happening, it is definitely on the inference side.
The distributed inference market, particularly for workloads that enterprises are going to want to run, is the largest portion of the growth that we see moving forward. For the very large language models, the hundreds of billions or even approaching a trillion parameters, there are probably going to be fewer organizations that can actually train full models of that size. Most of the deployment and inference will happen in a hybrid model or on-prem, where those model sizes, the fine-tuning, and the fine-grained approach for distributed inference are really going to favor both our CPUs and our accelerators, which can process from a performance-per-TCO perspective a lot better than what is available in the market today. What we see is just growth on the inference side.
Our CPUs are well positioned there for AI workloads where you have small to medium-sized models, typically under 10 billion parameters, which is generally what you will see in the enterprise. For the larger models, we have both the GPUs and the AI accelerators. Pulling that all together is gonna be the software, because ultimately the access that we are enabling for developers really comes through a software stack, and that's the homogenizing layer over all the heterogeneous architectures underneath. Look to see more of the investments we've made in integrating acceleration into our CPUs with the ongoing roadmap, and then the success that we expect to see with the Gaudi portfolio as well as with the strengthening GPU roadmap over time.
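As a rough illustration of the sizing rule of thumb described above, here is a minimal sketch. The function name and the exact 10-billion-parameter threshold are hypothetical choices for illustration only, not an Intel tool or official guidance:

```python
# Illustrative sketch of the hardware-selection heuristic in the discussion:
# small-to-medium models (roughly under 10B parameters) map well to CPUs with
# integrated AI acceleration, while larger models favor dedicated accelerators
# or GPUs. Names and thresholds here are hypothetical, for illustration only.

def suggest_target(num_parameters: int) -> str:
    """Return a plausible hardware target for an inference workload."""
    BILLION = 10**9
    if num_parameters < 10 * BILLION:
        return "cpu"            # e.g. a Xeon with built-in AI acceleration
    return "accelerator"        # e.g. a Gaudi device or a data-center GPU

print(suggest_target(7 * 10**9))    # a 7B-parameter enterprise model
print(suggest_target(176 * 10**9))  # a 176B-parameter large language model
```

The point is not the specific cutoff but the shape of the decision: route by model size and serving pattern, with the software stack hiding the architecture difference from the developer.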
CJ, do you have a follow-up question?
Yeah, just a quick one. As you reflect on the near term and the benefits, performance-wise, including performance per watt, with Granite Rapids and Sierra Forest respectively, how will that impact, if at all, kind of the ramp that you see for Sapphire Rapids and Emerald Rapids?
We're very encouraged with the ramp of Sapphire Rapids, which is on track, and we are expecting to see the 1-million-unit crossover in the middle of the year that we had anticipated. We have, as I described, over 200 designs that are shipping in the market now. We have over 400 design wins. All the major cloud service providers have either instances available today or coming throughout 2023. Sapphire is ramping beautifully and is highly differentiated, particularly in those high-growth workloads: AI, networking, HPC, and security.
Following that is Emerald Rapids, which will come out in the fourth quarter as a socket-compatible, drop-in upgrade to Sapphire Rapids, really giving our customers that ROI, the return on the platform investment that they've made. We see that as a strong product as well, and frankly, the validation time will be much, much shorter because, again, it's on the same platform. We're on track on all of the Sapphire commitments that we made and what we expected to see.
The tailwind that we, you know, hope to see is when China comes back, and as enterprise customers perhaps behave a little less cautiously. We are more indexed to China and enterprise than competitors, and that could be a nice tailwind for us in the second half of the year.
Perfect. C.J., thanks for the questions. Jonathan, can we have the next question, please?
Certainly. One moment for our next question. Our next question comes from the line of Ross Seymore from Deutsche Bank. Your question please.
Hi. Thanks for letting me ask a question and for all the great details. Sandra Rivera, I believe when you talked about the CAGR of the TAM, you increased it from a year ago at the analyst meeting. I think it was mid-teens, now you're talking low twenties. Can you talk about what's accelerating the TAM? More importantly, how does the roadmap, which all appears on or ahead of schedule, lead to share gains? How do you think about how the share gains evolve in that faster growing TAM?
Thanks, Ross. When we've talked about the TAM historically, we've been looking at the unit TAM of CPUs, the socket TAM of CPUs in servers. Increasingly, what we're seeing is that the core density that goes into those CPUs, and the compute density that we're able to deliver per socket, is increasing. We have actually seen that over the last several years, and we see it moving forward, projecting out over the next, you know, five years as well. For us, that means we are going to see higher growth from the fact that we are delivering more core-count density in our CPU portfolio.
In the performance-core swim lane we have, of course, the 4th Gen Xeon, the 5th Gen Xeon, and the follow-on with Granite Rapids; we are also coming to market with the highest core count in the efficiency-core swim lane with Sierra Forest next year. We are going to be accelerating the core count of our CPUs at a faster rate than we have historically, and that compute density will drive up ASPs, and we expect to be paid for that.
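The arithmetic behind measuring the TAM in cores rather than sockets can be made concrete with a small sketch. The function name and the growth rates below are hypothetical placeholders chosen only to show the compounding effect, not Intel guidance:

```python
# Illustrative only: if server sockets grow modestly but cores per socket
# rise quickly, a TAM counted in compute cores grows much faster than one
# counted in sockets. Rates below are hypothetical placeholders.

def core_tam_cagr(socket_cagr: float, cores_per_socket_cagr: float) -> float:
    """Combined annual growth rate of a TAM counted in cores."""
    return (1 + socket_cagr) * (1 + cores_per_socket_cagr) - 1

# e.g. mid-single-digit socket growth plus mid-teens core-density growth
growth = core_tam_cagr(0.05, 0.15)
print(f"{growth:.1%}")  # roughly low-twenties percent per year
```

This is one way to reconcile a mid-teens socket-based CAGR with the low-twenties figure quoted for a core-based view of the market.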
Ross, do you have a quick follow-up?
Yeah. In your wrap up comment from your presentation, Sandra, you talked about the net gain of market share growing and margins growing. I wanted to pivot onto the margin side of things. Any sort of framework from an operating margin perspective of how we should think of DCAI? You know, DCG used to be 40%, 50% operating margins. Your peers are kind of anywhere between 30%, maybe 40% operating margins in their respective segments. How should we think of what the end goal is for Intel on the profitability front?
Yeah. The gross margin does get better for us over time as we continue to strengthen the roadmap and provide more of that compute density that I was just describing, Ross. It is a high fixed cost business that we have as an IDM. As we've been ramping our 5 nodes in 4 years, and specifically when we look at Intel 7, which is the 4th Gen Xeon, 5th Gen Xeon process technology that we're using, when we look at Intel 4 and Intel 3, where we'll be the lead vehicle with Sierra Forest and Granite Rapids, we have to pay for all of that process technology, and you'll see that in terms of the operating margin.
Now, the other challenge that we've had, of course, is the macro headwinds: the TAM contraction that we saw, particularly in China, and the more cautious behavior from enterprise, now flowing through somewhat to the cloud service providers as well, while we still have this high fixed-cost business that flows through the P&L. As we regain process leadership, as the roadmaps get stronger, and as we continue to operate in a competitive environment but with a differentiated capability in workload optimization and the leadership that we have in those high-growth workloads, AI, networking, security, HPC, we do expect to be paid for that.
As the volumes improve overall for Intel, you know, that fixed cost gets a lot more spread out in terms of the product portfolios, in addition to, of course, all the work that we're doing in foundry.
Ross, one thing I might add there, I'll go back to comments that Pat made on our Q3 earnings call about moving to an internal foundry model and giving the manufacturing group a P&L for the first time. It will allow us to kind of have a better comp with our fabless peers with our BU margins, 'cause they will look like a fabless customer to the internal foundry model. We'll talk more about that as we go throughout the year. Remember, we have committed to kind of giving you visibility into that profitability for both the manufacturing and the BUs by Q1 of 2024. Jonathan, next question please.
Certainly, one moment for our next question. Our next question comes from the line of Aaron Rakers from Wells Fargo. Your question please.
Yeah, thanks for allowing me to ask the questions, and appreciate the call today. I wanna unpack a little bit on the software side. You know, it's been a while, I guess for me, since we've heard a lot on the update front with oneAPI. Can you help us appreciate that? I think there was a comment made about an 85% increase in deployment in 2022 versus 2021. I'm just trying to understand, or maybe unpack, a little bit of the ecosystem expansion you've seen with oneAPI and how we should kind of think about the success metrics looking forward? I do have a follow-up, Sean.
Yes. Thank you. A lot of what Greg shared is just the fact that we continue to invest in lowering barriers to entry, increasing market participation, and accelerating the rate of innovation; that's our DNA: open software, contributing to open source projects, and building out the ecosystem. The oneAPI stack really is, as I described it, that homogenizing layer for all the different architectures that we have, whether it's a scalar architecture with our CPUs, or our GPUs, our FPGAs, our AI accelerators. All of the different workloads that will land on a combination of those architectures really need to be simplified from an accessibility perspective, and we also need to provide workload mobility across the different architectures depending upon the needs of the specific application or use case.
The oneAPI stack is healthy, it's vibrant. We're continuing to expand the capability, not just in the core portfolio and the software tool chain that we've had historically, but also making extensions for AI specifically, and in security, where we have a differentiating capability vis-a-vis the competition. You know, what Greg described, and the strategy that we're driving, is really making our software more accessible: being in the standard distributions from the biggest ISVs in the industry, being part of all the open source projects and frameworks in AI, and then even the partnership with Hugging Face, again trying to get more of the technology accessible to more developers, and it's actually going quite well.
One of the other things that Greg talked about, again from an accessibility perspective, is having our own dev cloud stood up so that we can give access much, much earlier to our innovations and our products as we bring them to market, before you would see them as service instances in the cloud or be able to source finished products from the OEMs.
Aaron, please go ahead with your follow-up.
Yeah. Thanks, Sean. The one question I had as a follow-up is that you gave a tremendous overview of the overall product roadmap across Intel. One of the areas that I did not see any kind of detail on is this idea of data processing units, or IPUs, this idea of offloading workloads from the traditional Xeon CPUs. How does Intel see that market segment? Is that something you expect to grow? Any update on the positioning around that?
Yeah, that's a great question. It is a critical part of our roadmap, and I did talk about our FPGA portfolio. Between the FPGA portfolio, where we've won most of the major sockets available in the industry, and the IPU portfolio from our NEX organization, the Mount Evans product that was co-developed with Google, which they're deploying in their infrastructure but which is of course a standard roadmap product for us, we actually play extremely well, and we have the leadership position when it comes to IPUs. We do see the IPU as a critical ingredient and platform in an overall data center implementation.
We have a very strong leadership position between our ASIC out of the Network and Edge organization, as well as the FPGA portfolio that we have as part of the Data Center and AI Group.
Aaron, thanks for the question. Jonathan, can we have the next question please?
Certainly. One moment for our next question. Our next question comes from the line of Brett Simpson from Arete Research. Your question please.
Yeah, thanks. John, appreciate you putting the event together today. I wanted to ask just on the accelerator roadmap, I'm a little bit confused about where you would use Gaudi and where you'd use GPU Max, and whether both actually support oneAPI. If you can maybe just clarify, you know, the Gaudi versus the GPU Max plans. How should we think about the large language model inference market? What type of accelerators is Intel thinking about here? Would it be a GPU first approach or a Habana first approach? Thank you.
Thank you. For the largest models, where we have built out and demonstrated performance leadership, we would position the Gaudi portfolio both for training and for distributed inference. I think I shared with you the large cluster that we've been able to build, with 256 Xeons and 512 Gaudis, and the type of performance that we're seeing there has been validated in conjunction with Hugging Face on the 176-billion-parameter BLOOM model, which we've been able to run at better performance than the most popular GPU today.
For those very large language models, Gaudi 1 and Gaudi 2, and then Gaudi 3 coming next year, are really the strongest products we have to compete and win in that market. The GPU portfolio today is very well positioned for HPC, and that is the product that we're deploying in the Aurora system with Argonne National Laboratory. Actually, we have a number of other deployments there. Barcelona is another supercomputer that we've deployed with our GPU Max, and that's also paired with our Sapphire Rapids plus HBM, our CPU Max, portfolio. For HPC, and particularly high-end HPC, the GPU portfolio is positioned very well.
We're seeing, you know, 50% better performance on some of those physics applications and deployments than the leading GPU, with Gaudi positioned for the large language models. The thing that we're driving, and perhaps I wasn't as clear as I could have been on this, is bringing both the Gaudi AI acceleration capabilities and our GPU portfolio together, so that over time all of the software will run across all of those architectures and be highly abstracted from the developer, so that, again, you get workload portability between architectures. We believe we have an excellent, winning lineup in terms of the portfolio.
Ultimately what we'll have is one software stack for developers to very easily access the capabilities in those different architectures.
Brett, even though your first question had two parts, they were both about our accelerator portfolio. Do you have a quick follow-up question?
Yeah, yeah. Thanks. I do. I was gonna maybe just ask about anything more you can share around Habana's numbers since the acquisition. Maybe if you can talk a bit about prospects for Gaudi 3 and, you know, to what extent you're seeing customer design wins. I think you talked about the design wins in Sapphire Rapids. Is there anything you can share with us on the Gaudi side? Thank you.
On the performance side, we are demonstrating today, and this was some of the work in the blogs that Hugging Face actually published just yesterday, around 20% better performance for those very large models, the 176-billion-parameter BLOOMZ model, the open source model, of course, you know, an open source counterpart to a ChatGPT-style model. When you're looking at something in the 7-billion-parameter neighborhood, it's really over 3x the performance of the leading GPU. When we look at Stable Diffusion, they've actually been able to demonstrate 3.8 times better performance than the leading GPU. We feel really good about the performance of Gaudi.
From a price-performance, or performance-per-TCO, perspective, Gaudi 1 is available on AWS as the DL1 instance, and it delivers 40% better performance per TCO than, again, the leading GPU. We haven't talked about the design wins or the funnel or the customers that we have; we will be more communicative on that as we move through the year. As you can see, just in the partnership with Hugging Face and the work we've done with Stability AI, some of the clear leaders in the industry that are driving a lot of the AI innovation, we're competing very well, and that team is just executing beautifully.
Predictable cadence, high quality, A0 PRQs, on time every time. I look forward to more coming from them on Gaudi 3 and beyond.
Brett, thanks for the great questions. Jonathan, can we have the next question, please?
Certainly. One moment for our next question. Our next question comes from the line of Christopher Rolland from SIG. Your question please.
Thanks for the question, and thanks for the update here. We're big roadmap guys, and I think this is some significant progress. Congrats there, guys. My question's around Clearwater Forest. You know, I don't know if you can provide specifics or just broad details. If you could: what process, or are we still on Intel 3? Is that the way to think about it? What platform? Anything around core counts, memory channels, or any other specifics would be great.
I can't scoop myself. We'll have another webinar update on all of the goodness in the roadmap in 2025 and beyond. I did want to share today, though, that our efficient-core, E-core, roadmap is robust. We're deeply committed to it, and we have many innovations planned in the roadmap moving forward, starting with Sierra Forest. You saw the demonstration of Sierra Forest executing beautifully, a record power-on time; Lisa showed all 144 cores working great, booting to the operating system, actually multiple operating systems, in less than 18 hours. The team is really energized by the type of performance and really the excellence in execution that we've demonstrated there.
With Clearwater Forest in 2025, we are landing Clearwater Forest on 18A. 18A is that culmination of the 5 nodes in 4 years. We've got 4th gen Xeon, 5th gen Xeon on Intel 7. We've got our client Meteor Lake product on Intel 4. We've got Sierra Forest on Intel 3, and Granite Rapids on Intel 3. We'll have Intel 20A, and the process technology that we've landed Clearwater on is 18A. We'll have a lot more to say on Clearwater Forest, but what I wanna leave you with is just how deeply committed we are to leadership in high throughput, high core density, and performance per watt through the efficiency core swim lane part of the roadmap.
Chris, do you have a follow-up question?
I do. That's a fantastic update. Thank you very much for that, and this will be a good follow-on to it as well. I know 18A is pretty significant here. Sierra on Intel 3: it's interesting that you're using data center as the lead vehicle for process. This 18A is also interesting; I don't know quite whether that's gonna be the lead or not. Maybe you can address that. Intel, back in the day, once had a plan to use data center to lead all process, and kind of canceled that. Is this something that you guys would consider in the future, or even aspire to?
Yeah. It's a great question, and maybe the best way to think about it is that as we continue to have disaggregated, chiplet-based, you know, tile-based architectures, we are going to port the IP that benefits most from leading process technology to that process technology. We are going to continue to drive a predictable cadence in our server roadmap products. The convergence of the process technology and the products coming together may vary, may ping-pong, if you will, between server and client. But from an IP perspective, all of the business groups are very interested in porting their IP over to the latest process technology where they benefit from the IP being on the leading node. There are blocks of IP in, again, a disaggregated tile-based architecture.
Not everything needs to be ported over, and that reduces cost and, frankly, reduces risk. Not every block, you know, an IO block, a memory block, different parts of the IP infrastructure, may benefit from the latest process node. We're gonna be smart about how we design the products, how we architect them, and how we bring them to market, lowering the risk, lowering the cost, and increasing the predictability and quality of our execution.
Chris, the other point I might make on the five nodes in four years, and I know you know this well, is that Intel 4 and Intel 3 are our first nodes on EUV, and they're very similar nodes. The health of the Meteor Lake ramp in the second half of this year on Intel 4 will be a good precursor for Intel 3. Just keep that in mind as you think about the five nodes in four years. Chris, thanks for the questions. Jonathan, can we go to the next question, please?
Certainly. One moment for our next question. Our next question comes from the line of Srini Pajjuri from Raymond James. Your question please.
Yeah. Thanks, John, you know, for doing this. I guess my question is more about Sierra Forest. I'm just curious what percentage of the SAM, or of the workloads, these ultra-dense processors address today. Sandra, where do you see that going if you take a five-year view? Given that many of your customers have been implementing Arm architectures for a few years now, I'm just curious how you think about the opportunity here longer term.
Yeah. This is a great question, because the whole thinking behind the E-core swim lane has been us really watching the trends in the market, watching where our customers have wanted to go, in terms of the difference between an optimized per-CPU-TCO capability and the highest CPU performance with our performance core and our traditional Xeon swim lane. As well, we're seeing that more of the applications, microservices, high throughput, high density, are wanting something that is, you know, higher core count, more focused on throughput, and better performance per watt.
The E-core swim lane was really born out of listening to customers and then wanting to bring to market something that leverages the x86 ISA and tool chain and all the work that we've done for decades with software capability and workload optimization. Today, when you look at high density, high core count, and certainly other architectures, Arm architectures, that is still, relatively speaking, a small percentage of the overall market. We do see that growing with the continued growth in microservices born in the cloud, and a lot of the containers and functions-as-a-service types of implementations. We think this is gonna be an increasingly larger part of the portfolio.
Particularly as I was describing earlier, when you measure it by core counts, clearly this will be the higher core count part of our strategy. It's important. It's critical. We're investing in it. We have multiple generations planned. I talked about Clearwater Forest today. Clearly, we have the follow-on product already in planning as well. We do see this as a key ingredient in the overall mainstream compute that we're gonna drive for leadership in the market going forward.
Srini, do you have a follow-up question?
Yeah. Thanks, John. You know, I recall seeing a pie chart about AI opportunity, 60/40 split between CPUs versus external accelerators. Sandra, just curious, again, you know, longer term, do you think that 60/40 will remain at that level? Or do you see more and more of AI, you know, opportunity kind of, you know, getting sucked into the CPU itself, given that inferencing seems to be growing much faster than training? Thank you.
Well, we'll see. We'll see, because the market is evolving so quickly. We certainly know that all high-performance computing types of applications moving forward are going to need some level of AI, machine learning, statistical analysis, you know, data analytics. I mean, all of that in a workflow that any enterprise is going to have in their applications and in running their organization. The need to have integrated acceleration for AI across the entire complement of our CPU, this is a comment on the client side as well, is going to continue, and we have led here. We will continue to lead here.
We will continue to integrate that AI acceleration into the base CPU platform. Many of the inference opportunities are really going to be well addressed with the CPU. For those larger, you know, larger distributed inference, larger, of course, training models, we are going to need more of that accelerated compute between the GPUs, the AI accelerators, and even FPGAs do play there. You know, the good news is we have the full complement of heterogeneous architectures. Again, that software is what makes it more accessible, more portable, and protects the software investment that a lot of the customers are making today as AI continues to evolve and move as quickly as we're seeing it right now.
Srini, thanks for the questions. Jonathan, I think we have time for one last question, please.
Certainly. One moment for our final question. Our final question comes from the line of Tristan Gerra from RW Baird. Your question please.
Hi, guys. Thanks for letting me in. Just going back on Sapphire Rapids, I think you had said that it was competitive with 10%-15% of total server workload. Where do you think that percentage goes with Emerald Rapids and with Intel 3 next year?
I'm not quite sure I heard the question.
Tristan, you're coming in a little bit muffled. Do you mind repeating the question one more time, please?
Yeah. Sorry. Hopefully you can hear me okay. Maybe better now. You had talked in the past about Sapphire Rapids being competitive with 10%-15% of total server workload relative to your main competition and their latest product. Still, you know, obviously some catch up to do from a performance standpoint. Where do you think that percentage goes with Emerald Rapids and with Intel 3 next year before you get back to what you consider as performance leadership?
Got it. Okay. Actually, let me clarify, because what I've said in the past is that the highest-growth workloads are AI, networking, HPC, and security. That makes up roughly, you know, 15% of the workloads today, and that is growing faster than the mainstream workloads. That is an area where we have clear leadership over the competition. For the broadest range of workloads, we still have leadership in many of them, and gen on gen we've demonstrated an average of 2.9 times performance improvement over previous generations. We win in a lot of those mainstream workloads and applications. So, you know, we certainly don't feel like we are behind in mainstream compute.
What I was really describing was more that the high-growth workloads are a smaller portion of the TAM today. However, we do see that continuing to grow, and, you know, all the craze around AI, which is good, will drive even more growth in the areas where we have strong leadership.
Tristan, do you have a follow-up question?
Okay, great. Thanks. Yeah, yeah, thanks for clarifying. Then, on the role of FPGAs in data center acceleration and AI, how do you view your software positioning? You know, in the past I believe that's been really a primary impediment for adoption, just the lack of a unified software platform for FPGAs to gain more traction in the data center. How do you feel about your position with software to make this happen, relative to your competitor's Vitis platform?
Yeah. I actually look forward to talking a lot more about our FPGA business. I think John's gonna touch on that shortly. From an overall software perspective, we have the Quartus environment. We have increased investment in the overall FPGA business, and in the software capability in particular. I think you're going to see a lot more accessibility, a lot more ease of use in terms of the overall software stack and software platform for our FPGA portfolio. In addition to that, the oneAPI framework, as it relates to any of the AI work that we're doing, works on our FPGAs as well.
We have been, as I was describing, you know, looking at all the different architectures that we have to compete and to address our customers' requirements, and, you know, having a consistent software stack and tools, and making all of that capability available in the most common distributions and, you know, everywhere you go in terms of open source projects and frameworks and so forth. You know, the idea is that the FPGAs benefit from that investment and that stack as much as the other architectures do.
Perfect. With that, we've come to the end of the webinar. I want to first thank Sandra, Greg, and Lisa for putting together a great content update for the DCAI business. I'd also like to thank everyone for joining us and for the really good questions asked by our analyst community. If there are any follow-up questions, please reach out to me or my team. We'll do our best to help out over the next day or two. In addition, as Sandra highlighted, similar to Q1, we are planning two webinars for Q2. The first is a deeper dive into our Programmable Solutions Group, to give Sandra another opportunity to come up here and talk about FPGAs.
The second will be unpacking our move to the internal foundry model and why we believe giving our manufacturing group a P&L for the first time will drive better decisions and ultimately higher economic returns for our owners. Lastly, before we do sign off, I'd love to hear your feedback on today's event and where you'd like us to focus in future webinars. Please take a couple of minutes to complete the brief survey that we have. Thanks again, and we'll be talking to all of you soon.
Thank you for your participation in today's conference. This does conclude the program. You may now disconnect. Good day.