Please welcome Dell Technologies' Vice President of Investor Relations, Paul Frantz
Hello, and thanks everyone for joining us today for our 2025 Dell Technologies Securities Analyst Meeting. You'll find our press release from today, our presentation, and related disclosures, as well as additional content and information available on our IR website. Before we get started, I'd like to share with you our Regulation G and Safe Harbor disclosures. During this meeting, unless otherwise specified, all references to financial measures refer to non-GAAP financial measures, including non-GAAP revenue, operating income, net income, adjusted free cash flow, and earnings per share. A reconciliation of these measures to the most directly comparable GAAP measures can be found in our meeting materials and SEC filings. Growth percentages, unless otherwise specified, reflect year-over-year changes. In addition, statements made during this meeting that relate to future events are forward-looking statements based on current expectations.
Actual results and events could differ materially from those projected due to a number of risks and uncertainties, which are discussed again in our materials and SEC filings. We assume no obligation to update those forward-looking statements. Now, turning to the agenda, we'll begin with presentations from Michael, Jeff, Arthur, and David. We'll take a short break, reconvene for Q&A, take another break, and then host a management reception for everyone that's here. With that, let's turn it over to Michael.
Good morning, everyone, and thank you for joining us. It's been two years since we hosted our last Analyst Day, and we've been busy at Dell Technologies. During that last two-year period, the pace of change has been unprecedented, and the pace of our innovation has accelerated to match that. From the AI PC that you can hold in your hand to the galactic-scale implementations, we're building the technology-driven future. Our engineering, our supply chain, our customer relationships, and our services set us apart. As AI continues to expand into businesses and governments around the world, the opportunity ahead for us is massive. Customers are hungry to understand AI, and they need our help to deploy intelligence at scale. We're successfully translating that demand into growth and strong cash flow that we've largely returned to our shareholders, continuing our four-decade journey of value creation.
AI technology is now driving 45% of U.S. GDP growth, and we believe this is just the beginning. When a customer can realize productivity gains of 10% or 20%, it's interesting, but with sightings of 30% or 40% gains, it becomes an absolute competitive imperative. With a $114 trillion global economy, two-thirds of which is services and knowledge-based, and with the kind of productivity gains that we're talking about, AI is projected to add an additional $15 trillion to the global economy by 2030, taking it to $150 trillion in global GDP. At the core of all this growth and opportunity is data. With the overwhelming majority of the world's data created in the data center or in the physical world at the edge, it's compounding at a massive and accelerating rate. That proprietary data is the fuel for our AI factories. Data goes in, and a competitive advantage comes out.
For customers, it's almost that simple, but beneath that simplicity are incredibly complex engineered solutions: the infrastructure that generates tokens and creates intelligence. Hardware is cool again, and we are uniquely positioned, with opportunities to grow across both data center infrastructure and AI PCs. For 50 years, technology was all about calculating and computing, but now we're evolving into machines that help us think and that are thinking for us. What are these models creating? They're creating intelligence. How big is the market for intelligence? It's very big. It's probably the biggest market ever created. Even with all the innovation that we've seen over the past two years, we are still in the early stages of AI's S-curve of adoption. The models that we have today, while impressive, are the worst they'll ever be. Each wave of innovation builds on the next.
One-shot LLMs led to reasoning models and multimodal models. Agents combine understanding, decision-making, and action, and multi-agent systems collaborate, negotiate, and coordinate to get complex work done. The great thing about all of this is that the world is going to need a whole lot more compute and data storage and networking, which is exactly what we do here at Dell Technologies. In nearly every conversation with customers, AI is the central topic. Decision-makers want to know how AI can drive competitive advantage, efficiency, faster product development, innovation, and growth. Leading enterprises are already seeing strong ROI, treating IT spending as an enabler rather than a constraint. The other 90%, I would say, are still figuring it out, which is a massive opportunity for Dell. The momentum is clear. 85% of enterprises plan to move Gen AI on-prem within the next 24 months.
We're engaged from the very start of the journey. Before infrastructure, the heavy lifting is organizing all of their data and choosing the right models for each use case. Customers want to learn from Dell's own modernization and how we apply AI for competitive advantage in our business. When they're ready to scale with an AI Factory, Dell's already in that conversation with a broad portfolio spanning data center to edge across all industries and company sizes. We've already engaged with over 3,000 enterprise customers, and that's just the start. Over the past several years, we committed to our long-term value creation framework, and we delivered against that. We've roughly doubled earnings per share over the last five years. Earnings per share is scaling, growing faster than revenues, and we've returned $14.5 billion to our shareholders.
97% of adjusted free cash flow has been returned since the inception of our capital return program. Going forward, we're strengthening our long-term value creation model with more growth and a much higher EPS target, 15%-plus, to double EPS again over the next five years, with a continued commitment to shareholder returns. David is going to walk us through the details. Let's turn the stage over to Jeff for more details.
Glad to be here. Has it really been two years? I don't know. For me, it feels like our last earnings call was just 40 days ago. Time flies, and these days, so does the pace of change. Michael just made the case for that. What I'd like to do for the next few minutes is talk about that pace of change, how big it is, the speed at which it's coming, and ultimately the opportunity that it presents for us at Dell Technologies. Michael said it, but I want to be very clear. This pace of change is fundamentally changing our company. It's changing the way we innovate. It's driving our innovation. It's driving growth. It's driving value to our stakeholders. Quite honestly, it's playing right into our hands, right into our strategy.
As I talk about our strategy and operating model, I'll link this growth opportunity with the four-decade foundation of our company, and why the two intersect to present this opportunity. Since we last met, things have gotten a little crazy. Would you agree? We can't have a conversation where AI isn't part of the topic. AI, AI, AI. That topic has accelerated, and what once felt exponential now feels much more like factorial growth, bringing new opportunities to innovate and new opportunities to serve our customers literally daily. I'm going to give you a few examples to help illustrate that case, the first being in the area of AI investment. Two years ago, we stood in front of you with all of the best knowledge that we had about our industry, and we said by 2025, there'd be $200 billion of AI CapEx spent.
It's going to be over $400 billion this year. In AI hardware and services, we showed a forecast that by 2027, there'd be $124 billion of AI spend in those categories. It's now expected to exceed $310 billion. The data center is where the rubber hits the road, where all of that spend shows up. Two years ago, we thought U.S. data centers would require 245 terawatt-hours of power by 2028. That number has now nearly doubled, to 450 terawatt-hours. What's driving this? It's inference. The demand for inference, for long-thinking, autoregressive reasoning models, is now requiring more computational intensity, at a minimum 100x, two orders of magnitude greater than we thought less than a year ago.
Where does that show up? In the form of tokens, the measure, and what do tokens need? Tokens need computational capacity and capability to produce them. We thought, as we modeled this, that inference would drive one quadrillion tokens, that's 15 zeros, by 2028. Now it's 57 quadrillion, and I'm sure we're wrong. Enterprises are adopting this at an incredible level, and it goes well beyond tech. It's about reasoning across data. It's how they power their customer service models. It's automating IT workflows. It's summarizing research. All of that work consumes tokens, and tokens translate quite simply into computational need. More AI drives more tokens, which drives more infrastructure, which drives more AI, which is why we continue to talk about the accelerating pace of AI. Michael touched on this earlier, but I think it's important as well. The models themselves are getting better.
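The token-to-compute arithmetic described here can be sketched with the common rule of thumb that a dense transformer's forward pass costs roughly 2 × parameters FLOPs per generated token. This is a generic back-of-the-envelope illustration, not a Dell model; the 70B-parameter model size is a hypothetical assumption chosen only to make the numbers concrete.

```python
# Back-of-the-envelope sketch (illustrative only, not a Dell forecast):
# estimate the inference compute implied by an annual token count, using
# the common ~2 * parameters FLOPs-per-token rule of thumb for a dense model.

def inference_flops(tokens: float, params: float) -> float:
    """Approximate FLOPs to generate `tokens` tokens with a `params`-parameter model."""
    return 2.0 * params * tokens

QUADRILLION = 1e15

# Hypothetical example: 57 quadrillion tokens served by a 70B-parameter model.
total = inference_flops(57 * QUADRILLION, 70e9)
print(f"{total:.2e} FLOPs")  # on the order of 8e27 FLOPs
```

The point of the exercise is the scaling: a 57x jump in the token forecast translates directly into a 57x jump in required compute, before accounting for larger or longer-thinking models.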
The progress and accessibility of today's models versus where they were two years ago is vastly different. In 2023, we largely talked about one large language model. Today, you can run an open-weights model on consumer hardware that outperforms that model of two years ago. Today, the state of the art is 98% cheaper and more capable than what we thought was state of the art as recently as two years ago. It's an extraordinary shift. We like to say we haven't seen anything yet, because it's true. The rate at which the capability is growing, and what computational intensity is providing, is only accelerating. It's not just large language models. There's been an explosion in the category of small language models. Today, we're tracking more than 50 performant small language models in our industry.
That's critical, and I think it's key, because it runs on environments like AI PCs that reduce cost, latency, and power consumption while still providing incredible capabilities around coding, task automation, IT chatbot assistance, content creation, and so much more. This evolution makes AI more accessible today than it was two years ago, more accessible than ever. It's making life better. It's raising human cognition, all leading to us as humans doing less of the mundane work and more of the exciting value-creation work, the creative work, long term. Enterprise companies are using AI today to unify fragmented data sources, automate customer service, optimize their supply chains, do fraud detection, detect anomalies in their IT systems, even accelerate drug discovery in practical implementations. In other words, they're using AI to unlock competitive advantage for their specific enterprises.
These changes, I would argue, play in our favor. It's a nice tailwind for growth and value creation. We've discussed how inferencing plays a much larger role. It just keeps climbing as these deep reasoning models and agents proliferate. We've not seen anything yet. We're just at the very early stages. Michael showed it on his S-curve. We're just beginning to see the capabilities that are here. All of this drives more need for compute capacity to drive better reasoning cycles, better outcomes, better prediction of your next best action, better pattern recognition, and, at the end of the day, better decision-making. More insights, better decisions, faster. That's the outcome, and why we believe this is disruptive, and every company will have to deploy this capability to be competitive in the future.
Our engineers are working on the technology at the largest at-scale clusters that do this. Enterprises are actively looking for a trusted partner like us to help them get AI adopted quickly. I spend a lot of time with customers, and they ask a very similar pattern of questions. Where do I start? Is my data AI-able? Do I have the space? Do I have the power? Am I actually going to see a return on this investment? Interestingly, only 1% of leaders in companies today think they're mature on the spectrum of deploying Gen AI. Yet 87% of them think they're going to see AI drive revenues in their company over the next three years. 1% mature, yet almost all of them believe that Gen AI is a source of revenue growth for their companies in as little as three years.
Our answer is the Dell AI Factory. It helps support customers at every step of their AI journey. It is really the playbook for how to deploy AI at scale. Michael talked about this notion of exponential data growth. 80% of that data will be unstructured. Not text. Think about it as video, music, multimodal, all types of rich content, rich data, mostly derived at the edge, coming at companies that have to do something with it. To support that unstructured data at scale, and the speed at which it is going to come at you, you need a real AI-optimized storage and networking portfolio, in which we have the leading position in the marketplace. Those AI workloads, when you look at the storage architecture, need to run on a disaggregated storage architecture. Why? For performance and scale.
Customers are seeing 83% faster read throughput of their AI data lakes and data with disaggregated storage architectures. Fundamentally, the storage architecture has to change to feed the beast, to feed these computational engines. The way to do that is with a disaggregated architecture. Arthur, I'm sure, will talk about this in a bit. When you think about the Dell AI Data Platform, that is a necessary foundation for customers to actually take advantage of the technology. We think we have a game-changing opportunity here in the storage area to help our customers with AI data and a storage architecture that delivers that. Lastly, I'd be remiss if I didn't talk about these models running locally on PCs, improving the latency for time-sensitive tasks, saving network bandwidth, and allowing you to operate disconnected in many ways.
The punchline: the good old PC continues to be a great productivity device, even in the era of AI. It's actually essential for doing AI at the edge. It's not going anywhere. We've positioned ourselves from our PC portfolio all the way to the galactic mega clusters Michael talked about earlier. That spectrum is what we do. We've positioned ourselves for that opportunity, and we're in it to win it. You're going to hear that consistently throughout the presentation. Let me switch gears a little bit and talk about why we're uniquely positioned. As an engineer, maybe this won't surprise you, but I'll start with the first reason we think we're positioned to win and go after this opportunity: our engineering expertise. We've been building large-scale systems for many, many decades, deploying in data centers for many, many decades. This stuff is hard to do.
The engineering step function to deliver these at scale is significant. We're building these specialized custom solutions of tens of thousands of GPUs for the biggest names in our industry: xAI, CoreWeave, ServiceNow, G42, Mistral, to name a few. We're designing clusters of over 100,000 GPUs that scale exponentially. We're not working off a reference design. We're not working off a customer's bill of materials. We're engineering bespoke, optimized solutions that solve for what customers care most about: performance per dollar and performance per watt. That does not come from a reference design. That does not come from a bill of materials. That comes from hardcore engineering: understanding the customer's problem set and the environment we're working in, and then delivering an optimized solution for them. What it really means, translated into simple terms, is that we're optimizing for their data center. It's more than the node.
It's more than the rack or a row of racks. It's for the data center. It's beyond the things that you would expect: compute, networking, and storage. It's power management. It's cooling. It's software optimization and software management. These are areas that we've invested in for many years now, that are paying dividends, and that we see as key to our differentiation in the marketplace. One of the engineering concepts we've used in this area is what we call an engineering pod. We've created these engineering pods, invested in the engineering capability, to assign a pod to our largest tier-two cloud service providers and to our sovereign and enterprise customers, working directly with them on their needs. It allows us to extend the utility of how we apply our engineering skills to each and every customer opportunity. It's beyond compute, beyond networking, beyond storage.
It's really engineers who do thermal design and power management design, and data center engineers who help deploy these dense fabrics and dense infrastructure in very small spaces to optimize for performance per watt and performance per dollar. These engagements with these very, very large customers, as you might imagine, are heavily engineering-led. We work with our customers on their next tranche of deployment. We take what we learn from each of those interactions and carry it through the entire customer base, into the entire product design. We go through multiple designs, multiple iterations, in a very short period of time. The idea is you go from the initial concept to a design to delivery in very short order.
That process does take time, but we've compressed it, as I hope to show you in a few seconds, into a very, very small time capsule, which is differentiated for us and allows us to take that deep engineering expertise and rapidly deploy our products faster than anybody in the marketplace today. One of the reasons we win is what we call rapid scale deployment. Our execution time to deployment and installation is differentiated. These customers have their own backlog they need to convert to revenue, and time to market is essential and competitive for them. If you look at what we've done over the past two years, I think it's quite impressive. First to market with the GB200, two to six months ahead of our competition. First to market earlier this year with the GB300.
We put 110,000 GPUs, a liquid-cooled infrastructure, in a data center that was operational in weeks. That's 27,500 nodes, 1,536 racks, and over 6,000 switches, all up in very short order, roughly six weeks, driving tens of trillions of tokens once up and operational. It's laid the foundation for what is the best in the industry. When we deliver a GB200 rack or a GB300 rack to our customer, it's up and operational in their data center in 24 to 36 hours. From our dock to installed at their site in 24 to 36 hours. We believe that is a huge differentiator, and it's why we're winning in the marketplace today. You couple that, which allows us to meet those timeframes, with our deployment and installation services. We're unmatched.
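As a sanity check, the deployment figures quoted above imply a density we can compute directly. The GPUs-per-node value is inferred from the quoted totals, not stated in the talk:

```python
# Quick arithmetic on the quoted deployment figures (node, rack, and GPU
# counts are from the talk; GPUs-per-node is inferred, not stated).

nodes = 27_500
racks = 1_536
gpus_per_node = 4               # implied: 110,000 GPUs / 27,500 nodes

gpus = nodes * gpus_per_node
print(gpus)                     # 110000
print(round(nodes / racks, 1))  # roughly 17.9 nodes per rack
```

The implied four GPUs per node and roughly 18 nodes per rack are consistent with a dense, liquid-cooled rack-scale design, which is the point of the example.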
It's why you're seeing the momentum in our business to date. When a customer's thinking about our services, they start with deployment and installation, but beyond that, we're with them every step of their service journey. Comprehensive support, deeply engaged throughout the full life cycle, maximizing what they care most about: uptime. Uptime of these very large capital investments is key, and we believe we've cracked the code. With some of these largest customers, in these very, very large clusters, we've seen uptimes of our portion of the deployment north of 99%. At-scale design done in a short period of time, delivered very quickly, once off the truck, installed and working, uptime north of 99%.
We wrap around that a partner ecosystem that is unmatched and deeply rooted in customer choice, whether that's NVIDIA, AMD, Intel, Hugging Face, Meta, Google GDC with Gemini, OpenAI, xAI, Cohere, Red Hat, or Glean, to name a few. These partnerships are a collaborative effort that allows us to make the technology easier for our customers to deploy, and to deploy it fast. Speed and easy. That's our goal. We round out that capability, as the slide in front of you shows, with our financing capability. Our bank allows us to offer flexible, competitive financing to our customers. Together, we believe these capabilities give us a competitive advantage in the marketplace. We see it with the large-scale providers across the world, and it extends even to a broader set of customers. When I think about customers, we generally think of them in three categories.
They're on the page here in front of you: enterprises, sovereigns, and tier-two cloud service providers. We're using the knowledge that we gain from our tier-two cloud service providers, and it trickles down. Those at-scale designs, done at speed, and those customers coming back to us, tranche after tranche after tranche, allow us to take that information and continue to tweak and improve our designs. It trickles down to the other two categories of customers: our sovereign customers and our enterprise customers. Many of those enterprises are really thinking about how to deploy this, as I mentioned earlier, fast. You think about sovereigns: they're very much like our tier-two cloud service providers. Very large at-scale deployments, similar technical needs. We're still in the early days of the sovereign opportunity. We have several wins that we're very proud of.
You've probably heard us talk about it, but one of them is with the United States government, the Department of Energy's NERSC-10 supercomputer, and with G42 at scale, to name a few, with many more in our pipeline. That trickles down to our enterprise customers, which is our bread and butter. We've been serving enterprises for over four decades. Michael talked about the number. It's a number that we've publicly spoken about and continue to reinforce on the enterprise opportunity. Over 3,000 Dell AI Factory deployments to enterprise customers today, and it's growing. Robust portfolio. Many of our customers are in the piloting and testing phase. They're moving to production. I mentioned examples of how they're moving to inference. As I described, 1% are in the mature category. There's so much more to do in enterprise. The opportunity is immense. There are many that haven't started.
Quite frankly, if they don't start soon, they're going to fall behind and become uncompetitive in their sectors. The opportunity for us is to accelerate that, to expose the Dell AI Factory to more of those enterprise customers to help them deploy AI quicker and make it easy for them. We have a solution for every vertical, for every form factor, on-prem, at the edge, with an ecosystem to support it. I'm going to go through a few products to give you a sense of the breadth of that. The first will be the PowerEdge XE9680, with eight Blackwell B300 GPUs that run inferencing 11 times faster, direct liquid cooling, up to 256 GPUs per rack. The PowerEdge 7725, the XE9680, the XE9640, with the RTX Pro 6000, which essentially, think of this as a great enterprise offer.
It is an air-cooled PCIe option at a value price, a very attractive price point. You take that with our storage portfolio and the AI Data Platform, where you take our fast, scalable file and object assets, PowerScale and ObjectScale, with their leading density, performance, efficiency, and manageability. You add on top of that Project Lightning, which we talked about last year, a parallel file system. You add Project Dynamo, which brings KV caching for inferencing. Then you add our Dell Data Lakehouse, and you have a streamlined way to ingest data. That portfolio is fundamentally differentiated in the marketplace for enterprises, and it gives us an opportunity to attach more around it: networking and more of our storage assets. For us, those 3,000 customers that I talked about are seeing real returns on their investments and their use cases, and they're coming back and buying again.
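KV caching of the kind mentioned here rests on standard transformer arithmetic: each generated token's attention keys and values are stored so they are not recomputed, at a predictable memory cost that storage systems must absorb. A rough sizing sketch, using hypothetical model dimensions (generic transformer math, not Project Dynamo's or any specific model's figures):

```python
# Rough KV-cache sizing sketch (generic transformer arithmetic, purely
# illustrative): per-token cache = 2 (K and V) * layers * kv_heads *
# head_dim * bytes_per_element.

def kv_cache_bytes(tokens, layers, kv_heads, head_dim, dtype_bytes=2):
    """Approximate KV-cache size in bytes for `tokens` tokens of context."""
    return 2 * layers * kv_heads * head_dim * dtype_bytes * tokens

# Hypothetical 70B-class model: 80 layers, 8 KV heads, head_dim 128, FP16.
per_token = kv_cache_bytes(1, 80, 8, 128)
print(per_token)  # 327680 bytes per token

ctx = kv_cache_bytes(128_000, 80, 8, 128)
print(f"{ctx / 2**30:.1f} GiB for a 128k-token context")
```

The takeaway is why caching KV state in the storage tier matters: even one long-context session can consume tens of gigabytes, and evicting then recomputing it burns the GPU cycles the cache was meant to save.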
Not a one-time pilot. They see real return on investment, find more use cases, and come back to buy more AI, and we're seeing great traction and a great start there. Shifting gears a little bit, the engineering curve is getting harder. As much as I think engineering differentiates us today, I think it will continue to differentiate us, because the design requirements are becoming steeper. If I put this in the context of the demands on Dell AI Factory design, they are evolving at breakneck speed, because the technology coming at us is driving us to do that. Whether that's power, density, or cooling, it's now front and center. The architectural choices, the architectural innovation, are not slowing. In fact, they're accelerating. There are more opportunities for us to innovate, given the picture that you see in front of you, than ever before.
It wasn't too long ago that the standard data center rack was roughly 10 kW to 12 kW of power. Our first rack-scale designs were 120 kW. Today, we're over 200 kW per rack, on a path to 500 kW, and then to 1 MW. If you look across the portfolio of technologies, I'll pick one here as an example, NVIDIA's silicon roadmap: we go from Hopper to Blackwell to Rubin to Feynman. It's this type of power and density that's in front of us. More power per rack, more GPUs per rack, driving more innovation and opportunity for us. We are staying ahead, designing what's next and building what we believe is essential and key for our customers. How do we build the densest, most power-efficient clusters, maximizing every inch of data center space we're given, and provide that in an optimized, fast way?
The biggest challenges: cooling, power, and software. I'll walk through those real quickly. On the cooling side, we have to get the energy density off the GPU and out of the rack. We're developing a complete cooling system from the ground up. We're looking at new materials for better thermal heat transfer, the thermal interface from the chip to the heat sink and out, being able to get that energy density out. We're designing cold plates. We're designing smart manifolds, leak sensors, in-rack cooling units, all our designs, all our IP, all opportunities for us to differentiate. We're working with the power manufacturers to explore new materials like gallium nitride and silicon carbide to improve power density. We're designing a new power shelf to meet the demands you see in front of you here, of all of that power in each one of these GPUs.
For software, it may seem trivial, that software management and updates ought to be easy, but there are many, many software components in a rack, much less in rows of racks, that have to be updated on a weekly basis, and in some cases a daily basis. Doing that across a cluster of 100,000 GPUs is a pretty significant challenge. We're testing the orchestration to ensure flawless execution and no downtime. Remember, 99% uptime cannot be impacted by updating things, which really takes me back to one of the core tenets of why we win: services. It takes a lot of services to deploy and install, and I think going forward, as AI evolves, services will play an even more important role.
If I summarize why services matter and what we're doing on the engineering side to stay ahead of the curve here: we're doing what we always do. We're managing the complexity. We're staying ahead of the curve. We're designing for the innovation that's coming, and we're doing it at record speed, something I've never seen in my nearly four decades at the company. As we continue to execute aggressively and look at our operating model and our strategy, Michael showed it. We talked about this. Nothing's changed. What differentiates us are our leading end-to-end solutions, our industry-leading go-to-market model, our supply chain, and our services. The four tenets of our operating model remain unchanged, and the strategy of the company remains unchanged. These are what make us who we are. This is what makes us differentiated in the marketplace.
Quite frankly, it's where the investments go. It's how we're differentiating and building new capabilities across the company. What I thought I would do is offer a little bit of a teaser. You've heard us on some of the earnings calls talk about how we're applying AI in the company, how we're taking that operating framework I just showed you and making it stronger, and give you some insight into a few things we're doing in the company that many other companies could do as well. We started in a place three-plus years ago where, what were we really doing here? We found we were doing everything. We had an environment where, if you called it AI, it got supported, even though it wasn't AI. We cleaned all of that up. We found 900 projects, most of them not AI.
We found that we didn't have a well-articulated strategy. We didn't have an infrastructure that could scale, and we had a data environment that wasn't ideal for AI. We fixed all of that. We put an AI strategy together. We built a data mesh across the company. We put in the infrastructure. Once we had that foundation, we were able to address the direct needs. What were the enterprise use cases? How would we use those enterprise use cases in the four areas that differentiate our company? You've heard me talk about this on earnings calls, but the six use cases generally used in enterprises today are content creation and management, support assistance, natural language search, design and data creation, code generation, and content or document automation. Among those, we picked four, and we applied them to the four areas.
I'll give you a quick drive-by of what we've done. In R&D, we've taken the idea of coding and knowledge assistance, and we're accelerating and improving our product development cycles. We have become more productive. We've reduced cycle time. We can do more work and provide more features in shorter periods of time. I talked about this at DTW, but we've implemented a service assistant. We call it next best action: a little assistant for every service agent in our company to help them navigate a customer's challenges. The result has been improved customer satisfaction. The result has been increased efficiency and productivity in our service organization. I've talked about this as well on our earnings calls.
We're using predictive systems and digital twins in our supply chain to provide a more resilient and more responsive supply chain, particularly in this era of an environment that is changing, sometimes daily. In sales, a very exciting area for us, we've deployed a sales chat assistant to improve seller productivity. We've taken all of the company's internal information, linked it to the external market data that we have, and we're providing our sales force with fast, accurate answers at their fingertips. They can get product insights. They can get product intelligence. They can see other customer wins. They can generate content. They can write a proposal to a customer.
We can intersect the customer wherever they are in their journey, all across one system, one interface, that works nearly instantly, and we're providing that to each and every one of our tens of thousands of sales makers. Inside the company, it's like magic. It's the most modern sales tool we've ever given our sales force, and they're utilizing it. As I mentioned, we've been acting as customer zero for AI implementation. We are seeing real returns on our investments. We're driving efficiencies. You see it, as we've communicated, in the bottom-line performance of the company. We share this journey and our knowledge with many, many customers. That insight helps them. Once they've made their decisions, we're providing the leading infrastructure solutions to help them deploy AI and get to their competitive advantage faster. Quite frankly, we're just getting started. Maybe a few closing thoughts.
If Michael and I have communicated nothing else, it's this: the rate at which our industry is changing is unprecedented, something we've not seen in our four-plus decades. It's truly remarkable, and we see no signs of it slowing down. Our strategy is unchanged. We will continue to execute the unique operating model that we've built and fine-tuned over the past four-plus decades. Hopefully I've communicated that we're not just keeping up with industry demands. We're actually driving the industry, shaping the future of AI infrastructure. I believe we were built for this moment. The trends are working in our favor. We're excited about how things are shaping up. We're all in, and we're in it to win it. With that, I'll turn it over to Arthur to talk about ISG.
Good morning. It's great to be back in New York City. Hope everybody's doing well.
As Jeff and Michael have talked about, the technology industry is driving groundbreaking innovation as we usher in the transformative era of artificial intelligence. Dell Technologies is engineering and scaling the infrastructure to make that happen. The opportunity ahead of us is extraordinary. Estimates of AI spend continue to rise. Over the next three years, more data will be created than in all of preceding human history. By 2030, data centers around the world will require nearly $7 trillion in investment just to keep pace with the torrid demand for compute. These forces will combine to create a significant need for infrastructure and services that can turn data into intelligence and complexity into clarity, and do so securely, efficiently, and at scale. The rapid developments that we see in artificial intelligence will also disrupt traditional data center architectures. Disaggregation will reign. Silos will disappear. Data will flow seamlessly.
What was once known as dark and cold data will become observable and active, constantly in circulation, feeding AI engines and agents. Our role as a trusted advisor has never been more important, and we are uniquely positioned to guide the architectures of the future. With our winning portfolio of compute, network, and storage, we are empowering customers to deploy artificial intelligence where it matters most, close to their data, whether on-prem, at the edge, or in the cloud. We begin from a position of great strength. Our share leadership position, coupled with world-class capabilities in our supply chain, our services organization, and our go-to-market engine, positions us well to continue to capture share, grow margin, and win the next wave of AI. Over the last several years, ISG has delivered durable, consistent performance.
Since FY 2018, ISG has grown revenue at a CAGR of nearly 8%, operating income at a CAGR of 9%, and we've expanded operating margins 140 basis points. We are number one in compute and storage, with share positions greater than our next two competitors combined. In compute, we have led in revenue share for 33 consecutive quarters. Over the last decade, we've gained over 700 basis points of share. If you exclude China, we've gained 1,767 basis points of share. Over that same period, Dell Technologies has captured 50% of the market growth in compute, more than the next three competitors combined. In data storage, we have been the revenue leader for 94 consecutive quarters. We're number one in every major category: external RAID, entry, mid-range, high-end, and purpose-built array. In Q2 of calendar 2025, we were not just number one in all-flash.
We grew north of 25%, a very strong premium to the market, gaining 244 basis points of share. On top of all of this, we grew a new business, our AI-optimized portfolio, to at least $20 billion in just two years. We now service over 3,000 enterprise customers and many of the largest and most relevant tier two cloud service providers. Given our track record as a structural share gainer and with AI as a significant tailwind, we are once again raising the ISG long-term growth framework revenue CAGR from a range of 6% - 8% to 11% - 14%. Our durable and consistent performance is ensured by a singular strategic focus on customer-centric innovation and a first-to-market mindset in everything that we do. Last year, we were the first to ship an NVL 72 GB200 rack. That was no small feat.
Then we repeated the feat this year as the first to ship an NVL 72 GB300 rack. We introduced PowerCool, the most technologically advanced cooling solution in the marketplace, which includes custom-designed and engineered cold plates, manifolds, cooling distribution units, and enclosed rear-door heat exchangers, all under a single pane of glass for simplified management. In addition, we've increased the mix of our software developers focused on product delivery by 18%, ensuring faster innovation for our customers. We've also united the entire software development apparatus under a single agile model and a common CI/CD motion, increasing feature velocity and quality, all augmented with AI tools for accelerated development and expanded functionality. With these changes, we are moving extremely fast, delivering features to customers on a quarterly basis. In the areas of the portfolio where we're most advanced, we've seen velocity increase upwards of 45% year-over-year.
We expect to see similar results across the broader portfolio as we mature. Given our financial strength, our share position, our operational excellence, and this customer-centric focus on innovation, Dell Technologies is well positioned to extend its leadership. Let's begin with compute, where PowerEdge is the undisputed backbone of enterprise IT. For over 17 generations, Dell Technologies has led the way in compute innovation. Our latest generation of servers is, again, redefining what's possible. This is our most dense, power-efficient, secure, performant generation ever, designed to meet and exceed the rapidly evolving demands of enterprise customers. PowerEdge is engineered for the dual reality of IT, balancing the performance and efficiency of traditional workloads while delivering the acceleration and scalability needed for AI and data-intensive workloads. Whether you're talking about core business applications or inferencing at scale, PowerEdge is the compute platform that is making that happen.
Our servers are equipped with intrinsic security, advanced automation, and cutting-edge liquid cooling technology, giving customers the ability to lower their TCO, consolidate legacy infrastructure, and reduce environmental impact while preparing for the exponential growth in compute. The opportunity for transformation is significant. More than 70% of our installed base resides on servers that are 14th generation or older. This is not just a statistic. This is a clarion call to action. Refreshing to 17th-generation servers allows customers to reduce power, floor space, and bandwidth, consolidate workloads, and prepare for AI adoption. For customers who are ready for AI, as Jeff Clarke talked about, our portfolio is built for performance and scale: 14 times faster training of large language models, 11 times more compute for accelerated inference, and direct-to-chip liquid cooling for optimal efficiency, even at scale.
In short, no matter how you look at it, PowerEdge is the compute foundation of the data era. As AI elevates data as a key differentiator, our Dell IP storage portfolio is there to unlock value. In this age of AI, an organization's greatest competitive advantage is how it utilizes its data. How an organization manages, secures, and scales its data will increasingly separate the winners from the laggards. To be successful, organizations must be able to unify disparate data silos, provide seamless data access, and deliver premium, high-grade data to support AI workloads. The Dell IP storage portfolio is there to help customers realize the value of their data as a competitive advantage. As we talked about, Dell Technologies is, by a wide margin, the leading provider of data storage infrastructure and software, with very deep enterprise relationships.
This scale gives us unique insights into how customers are navigating a world where traditional and modern workloads must coexist, positioning us to guide the future data architectures that will redefine how customers think about their data. To start, the simplification of IT is an absolute must. Businesses around the world are moving to a multi-hypervisor environment, supporting virtual machines, containers, and bare metal. This requires the flexibility to avoid lock-in, in a disaggregated infrastructure of compute and storage, each scaling independently, providing a 22% reduction in cost versus a less flexible and more costly HCI solution. This is where our traditional portfolio of PowerStore, PowerFlex, and PowerProtect Data Domain All-Flash, augmented by the Dell Automation Platform, comes into play to help customers modernize core business applications with flexibility, efficiency, and resilience. Let's start with PowerStore, the world's leading midrange storage array.
With a modern container-based operating system and the industry's only five-to-one data reduction guarantee, PowerStore is the gold standard for enterprise storage. Named number one in innovation and ease of use, PowerStore is now trusted by over 17,000 customers globally and has grown revenue double digits in each of the last five quarters. PowerStore also comes with access to the Dell Automation Platform, which greatly simplifies infrastructure, allowing customers to deploy their cloud operating system of choice across a wide variety of PowerEdge servers and PowerStore, ensuring agility, control, and tremendous scale by being able to expand compute and storage independently. Next, PowerFlex Ultra, our latest software-defined storage release, greatly improves storage efficiency and reliability, delivering ten nines of data availability and up to 80% storage efficiency. PowerFlex Ultra helps customers reduce costs by maximizing resource utilization while providing robust redundancy without unnecessary replication.
Next, PowerProtect Data Domain All-Flash delivers world-class cyber resilience with speed and efficiency. With up to 544 TB of usable storage per node, it delivers four times faster restores, two times faster replication, 80% less power consumption, and 40% less floor space. For artificial intelligence, as Jeff said, we've introduced the AI Data Platform, which is powered by PowerScale and ObjectScale, our engines for unstructured data. This platform is designed to deliver the performance, scalability, and security to support AI workloads and the enterprise-wide deployment of agents. Our Dell IP storage portfolio is built to win not only in traditional workloads such as private clouds, but also in emerging workloads related to artificial intelligence, and to support mission-critical workloads across the world with our cyber solutions.
This portfolio is also geared to expand margins, not just by selling more Dell IP storage, but by extracting more value from the solutions themselves. Our storage innovation engine is firing on all cylinders, and it positions us well for future growth, margin expansion, and strengthened leadership in the data era. This foundation is critically important as we look to what comes next. AI workloads are scaling from cloud-native companies into enterprise IT, in data-intensive industries such as healthcare, finance, and manufacturing. These deployments will remain hybrid, requiring performant, secure, cost-effective solutions both on-prem and in cloud-connected environments. We are in the very early innings of enterprise adoption, and we are making very good progress quarter over quarter. We sell to more and more customers on a sequential basis, with a heavy focus on compute.
As customers move into production, they will require a full stack solution. Our value proposition here with the Dell AI Factory is very strong. We are one of the few companies in the world that can design, engineer, manufacture, deliver, integrate, service, and support fully integrated solutions that include the compute, the network, the storage that is optimized for AI outcomes and to accelerate time to value. Our solutions build on our unique partnerships and the world's broadest ecosystem of AI partners, including OpenAI, xAI, Google Gemini, Meta Llama, Hugging Face, Cohere, Red Hat, Glean, and so many others. We are the leading partner with NVIDIA and AMD, building token generation engines of all sizes to meet very specific customer needs. We are that one-stop shop partner, guiding customers from model selection to infrastructure while providing for data ingest, fine-tuning, and agentic workflows that maximize value.
Our AI Factory is already powering leading industries around the world. A couple of examples: CSX is a leading transportation company. CSX has partnered with Dell Technologies to deploy an AI Factory that includes XE servers and the Dell NativeEdge operating platform to improve operational efficiency and to deploy real-time analytics at the edge to reduce risk at railroad crossings. This deployment underscores CSX's commitment to using AI to do both, improve operational efficiency and reduce risk, while underscoring Dell Technologies' unique ability to meet a very specific use case. Another example: Hudson River Trading is a leading global quantitative trading firm powered by AI research and technology. We have partnered with Hudson River Trading to deploy an AI Factory that includes liquid-cooled XE servers and the M7725.
These high-performance systems are built to support Hudson River Trading's demanding AI and quantitative trading workloads with high performance, scale, and incredible compute density. This deployment underscores Hudson River Trading's commitment to using AI to drive innovation in quant trading and machine learning, while underscoring Dell Technologies' unique capability to support even the upper echelon of the financial services industry. We are a trusted partner in regulated industries, addressing the sovereign AI needs of enterprises and governments who demand control over their AI models and their data. Our infrastructure allows for data residency, security, and compliance with AI mandates, without compromise. We are in the very early innings of this. We service over 3,000 customers, and we have 6,700 customers in our opportunity pipeline. We are extremely excited about the expanding opportunity for growth in this area. In closing, our story here is pretty simple.
Dell Technologies is not simply participating in the era of AI. We are engineering it. We are enabling it. We are leading it. The world's data will double over the course of the next two years, and AI is transforming how that data will be used. Dell Technologies is the engine behind that transformation, modernizing data centers, powering AI, and securing the world's mission-critical workloads. This positions us well, not just for the next technology cycle, but for durable revenue growth and margin expansion for many years to come. Thank you for your time this morning, and let's welcome Jeff back.
Lots to be excited about in ISG, right? I think the same is true of our client business. I thought maybe we'd spend a couple of minutes level-setting on our client business, and then I'll get into what we're going to do about it.
First, if you look at the chart, we've been able to build a business that's scaled and adapted to whatever changes the market has gone through. It's quite a resilient business. We've made it through the ups and downs. One thing that's been consistent is our execution and the discipline of that execution, delivering steady operating margins throughout that period, and our commitment to value creation and to this business overall. We're proud of the results you see here. We're number one in very important categories like commercial PCs, workstations, and displays. Over the balance of the decade, we have gained share and grown the business. I'll tell you, that doesn't happen by chance. We've been at this for four decades. We've built many long-standing relationships with our customers.
We've driven customer-inspired innovation across the entire portfolio for many years. I've been at this a long time; I've been part of most of our 41 years in this business. I'll tell you, while the PC may feel like a device that's been around for a while, we've not seen anything yet. Its role in productivity, its role in AI at the edge, that story has not been written yet, and we're very excited about that opportunity. I think about this refresh cycle that we're in, and it's been a little slower than we expected, but it's been steady. It continues. This refresh cycle is a large one. The installed base is 1.5 billion units. Many of the enterprise fleets deployed today are three to five years old.
We are just days away from Windows 10 end of life, and there are still over 500 million PCs that need updated hardware to make that transition. It continues to be a massive opportunity for us. The opportunity around the PC, what I call the PC estate, the peripherals, docks, displays, keyboards, mice, cameras, microphones, is absolutely larger than the PC market. The two of them together present a large opportunity for our business. In fact, the peripherals market is actually expected to grow slightly faster than the PC market itself, which provides tremendous opportunity. You take that foundation and you tie it to what I think is encouraging and what I led with: the AI ISV ecosystem is ramping up. We're beginning to see PCs with NPUs, and we now have an ISV community that's taking advantage of those NPU capabilities.
We're tracking well over 100 of them. They're building applications and new usage patterns for those NPUs that will roll out, increase the utility, and extend the capability of the PC. We talked about the smart small language models earlier this morning. They're going to become even better. I think Michael said it, or Arthur said it, or some combination of us said it: the models are the worst they'll ever be today. They get better tomorrow, the next day, and the day after. The same is true of the small language models. Customers are actually using them today. They're using them to drive on-device productivity for things like content creation, translation, and research, without a cloud dependency. We see retailers using them in stores for customer assistance, like return handling.
We see healthcare providers using them to automate appointments, summarize appointments, and do diagnostics, all done on the PC, all keeping that sensitive customer information on-prem. Those use cases are just the start. ISVs are going to deliver more capability, more applications that will embed the need for an NPU. Then we can talk about what comes next, which is agents. Building the base capability of AI into the PC, with the promise of agentic AI coming, and coming quickly, does one thing: it extends and expands the utility of the PC, making it essential as we go forward. You put all of that together, and we see lots of opportunity. One of the things I've had to answer on our earnings calls is, are we adjusting our strategy? What about our strategy? What about share and growth in the business?
I thought I'd spend a few minutes talking about strategy, ultimately leading to what we're going to do, in three quick steps, to improve the growth prospects of our PC business. The first is this one, the premium space. We talk about it all of the time on our calls. It's the most important real estate in PCs, and the premium PC continues to be our primary focus. It's where we've invested, and we've seen strong results. It's roughly 25% of the market. We have taken share there. This area has grown 6% since 2019. Our share performance is up three points. It's where we needed to be, it's what we did, and we took share. However, the PC business is a business of scale, and 75% of the units aren't in that top 25%. That's the area where we have lost share.
There are very important categories in this area that we should be in, that we have not been aggressive enough in. My message to you today is that you should expect us to be in the categories that matter in this 75% where we have lost share. You might ask, what are some of those? Clearly, there's the premium consumer business, but more importantly, there's the education market. There are the lower price bands, the emerging commercial PCs across our industry. In the PC industry, again, this is a business of scale, and scale matters. We have to participate. It's that simple. There are areas of the market that we have not been as strong in, and you should expect us to be strong in them again. I wish I could tell you there was more to the strategy. That's it. We're going to be more aggressive and play in the other 75%. We've won in the top 25%. That's very important to us. The other 75%, whether that's premium consumer, the education market, or the emerging commercial market that I just described, we are going to participate aggressively. We can do this and still maintain the operating margin commitment we've made to you, 5% - 7%. I know we can. We've done it before. We'll do it again. We're going to lean into these opportunities. We have a great foundation, four decades in the making. We're making the adjustments as we speak. I'll talk about a specific example in a moment, but that's what we're going to do. So, what are we working on?
How are we going to do that? There are three parts to our strategy here: win commercial, fix consumer, sell more peripherals. I'll go into a little more detail, but that's all I want you to remember about the PC business today. Take share in commercial, fix consumer, sell more peripherals. That's it. If we do that, all the numbers on the charts that Dave is going to show you in a minute, we're going to get there. How are we going to do that? We're going to regain commercial momentum. We're going to drive for share, balancing profitability. Our commitment hasn't changed. We're going to continue to optimize the portfolio. We're going to make it easier to find things on Dell.com. We're going to make it easier for our sellers to sell with Dell Sales Chat. We're going to make it easier for the channel to sell our products.
You might recall I called it out on our earnings call 40 days ago. We launched the Dell Pro Essential, which is targeted at that emerging commercial market space and those price bands in developing countries where we see a big opportunity. We're going to continue to focus on small and medium business, where we believe we have an advantaged go-to-market model. You'll see us continue to focus there. On fixing consumer, it's pretty darn simple. We need improved profitability. We need more products to cover the marketplace. More products: the right product at the right price in the right go-to-market, whether that's Dell.com, a retail store, or the channel. Right product, right time, readily available, with some marketing behind it to tell people we're in it to win it. The consumer business is important to us.
You can't be in the PC business and ignore 45% of units. The message here is we're improving profitability, and we are going to play in all price bands in consumer. The third part of the strategy I just talked about: we're going to expand margins by selling more stuff around the PC. The estate around every desktop: a dock, a monitor, a mouse, a keyboard, a speaker, a camera, anything else you want, Dell branded. We started that work two years ago, and that work continues. We think it's a big opportunity in this area that we've targeted. It's a $50 billion TAM with accretive margin rates. Three things I want you to remember and walk away with about the PC business: we're going to take share in commercial, improve consumer, and participate fully in all price bands.
We're going to take advantage of the PC estate and our footprint in selling and go-to-market capabilities to sell more peripherals. That's it. That's the strategy. That's the story. That's what we're going to go do. We have a strong foundation. Hopefully, you see a simple, elegant, but aggressive strategy. We're not happy with our performance, and it's going to change. There's lots of opportunity around this marketplace, and our team's excited about it. This is an incredibly important business to us. The reason I'm saying we're going to change it, and David will build upon this, is that this is our most capital-efficient business in the company. When it grows, it generates significant cash, and that cash creates long-term value for our stakeholders. On that note, I'll turn it over to David.
Good morning. Great to see everybody. You've heard from Michael, from Jeff, and from Arthur, and I'm pretty excited to tie all that together now and talk about our financial priorities. We've increased our long-term value creation framework and remain committed to our capital allocation plan. As Michael touched on earlier, our primary focus is threefold. First, drive revenue growth. Second, grow EPS faster than revenue. Third, generate strong cash flows, which we largely return to our shareholders. When you look back at our track record over the past four decades, that's exactly what we've done. Over the past four years, we've steadily increased value for our shareholders. In fact, since 2021, we've more than doubled our revenue and EPS framework, all while remaining committed to returning more capital to our shareholders.
This morning, our new long-term framework targets revenue growth of 7% - 9%, driven by continued strength in AI; EPS growth of 15% or better; net income to adjusted free cash flow conversion of 100% or better; a target of returning 80% + of adjusted free cash flow to our shareholders; and an extension of our dividend growth commitment through FY 2030. Let's spend a bit of time talking about how we're going to deliver this framework. Let's start with revenue. We expect revenue growth of 7% - 9%, up from 3% - 4%. This is underpinned by 2% - 3% growth in CSG and 11% - 14% growth in ISG, up from 6% - 8%. Jeff and Arthur have touched on the strategy and a lot of the tailwinds in both ISG and CSG, but maybe a few points just to reemphasize.
In ISG, we will continue to drive innovation and growth across our AI offerings. We'll increase our Dell IP mix in storage, and we'll capitalize on the modernization of the traditional data center. In CSG, as Jeff just mentioned, we'll get back to driving scale and being structural share gainers across the company while accelerating both attach and profitability. Let's move to EPS. We expect non-GAAP diluted EPS growth of 15% +, up from 8%, and almost double that of revenue. We have four key operational levers to generate the EPS growth. First, we're going to continue to drive durable revenue growth as we grow and take share in AI and across our core portfolio. Second, we'll increase gross profit, and we have clear opportunities to expand rate in storage as we increase Dell IP mix, and in AI as enterprises drive more meaningful adoption.
Third, we're investing in the company while staying focused on simplifying, standardizing, and automating, and, where possible, applying a little more intelligence to unlock more capacity, more productivity, and more efficiency. That's across the company, from engineering to sales to operations to services and more. Lastly, our share repurchase program, which is both programmatic and opportunistic. Over the past five years, we've roughly doubled EPS, and with this EPS target, we expect to double EPS again. We have an operating model that generates strong cash flow. In fact, since 2021, we've averaged roughly $4.9 billion of annual adjusted free cash flow. This starts with revenue, leveraging our go-to-market engine to grow and take share. Our focus on financial discipline is a true competitive advantage for us, whether that's pricing, leveraging our supply chain, or realizing more cost efficiencies.
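The "double EPS again" claim above follows from simple compounding of the 15% growth target over a five-year horizon. A minimal sketch in Python (the rate and horizon are taken from the remarks; this is illustrative arithmetic, not company guidance):

```python
# Compounding sanity check: a growth rate sustained for n years
# multiplies the starting value by (1 + rate) ** n.
def compound_multiple(rate: float, years: int) -> float:
    """Cumulative growth multiple after compounding `rate` for `years`."""
    return (1.0 + rate) ** years

multiple = compound_multiple(0.15, 5)
print(f"15% compounded for 5 years -> {multiple:.2f}x")  # ~2.01x
```

In other words, a 15% CAGR roughly doubles the base in five years, which is consistent with the stated track record of doubling EPS over the past five years.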
We are relentlessly focused on working capital and our truly differentiated negative cash conversion cycle. All in, we expect net income to adjusted free cash flow conversion to be 100% or better, and that sets us up nicely to talk about our capital allocation framework. We continue to target returning 80% + of adjusted free cash flow to shareholders. If you look back on our track record since the inception of our dividend, we've outperformed that target, averaging roughly a 100% return, which equates to more than $14.5 billion of adjusted free cash flow. We've achieved this with programmatic share repurchases, but also with opportunism during periods of price dislocation. In fact, if you remember back in Q1, we repurchased almost as many shares as we did in the entirety of FY 2025.
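For readers less familiar with the term, a negative cash conversion cycle means cash comes in from customers before it goes out to suppliers, so working capital funds itself as the business grows. A minimal sketch of the standard formula, with hypothetical day counts (the numbers are illustrative assumptions, not Dell's actual metrics):

```python
def cash_conversion_cycle(dso: float, dio: float, dpo: float) -> float:
    """CCC = days sales outstanding + days inventory outstanding
    - days payables outstanding. A negative result means customers
    pay before suppliers are paid."""
    return dso + dio - dpo

# Hypothetical inputs for illustration only.
print(cash_conversion_cycle(dso=35, dio=20, dpo=85))  # prints -30
```

With these assumed inputs, the company collects roughly a month before its payables come due, which is the working-capital advantage being described.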
Since the start of that dividend, we've repurchased roughly $10.5 billion of shares, reducing shares outstanding by over 80 million, net of issuances. We've also executed on our dividend, growing it 12% in FY 2024, 20% in FY 2025, and 18% this year. We remain committed to growing that dividend 10% or more through FY 2030. This is an extension of our previous target, which ran through FY 2028. We remain committed to our investment-grade rating and our 1.5x core leverage ratio target. There's no change to our approach to M&A. We're going to remain focused on tuck-in, accretive IP opportunities that can accelerate our strategy. To recap: revenue growth of 7% - 9%, driven by strong AI growth; an EPS target of 15% +, almost double that of revenue; and net income to adjusted free cash flow conversion of 100% or better.
We'll take this strong cash flow and target returning over 80% of it to our shareholders. We'll achieve this through programmatic and opportunistic share repurchases, along with our dividend. We've extended this commitment all the way through FY 2030. There you have it as we wrap this morning. All of the work we've done over the past four decades has merely been the foundation of what is now the AI era. This is driven by Michael's vision and our operating model. We have clear leadership opportunities across our portfolio, from client to traditional infrastructure to AI. We have a go-to-market engine that provides us with unparalleled insights and relationships. We have a supply chain that is world-class. We have a services organization that can touch all corners of the world. We're going to take this model, augment it with AI, and make it even stronger and more differentiated.
All of this culminates in a business and a team that is focused, as a reminder, on driving revenue growth, growing EPS faster than revenue, and generating strong cash flows. Our target is to return over 80% of that cash flow to our shareholders. Thank you. That completes our morning session. We'll now take roughly a 10-minute break. Once we come back, I'll be delighted to welcome Michael, Jeff, and Arthur back on stage, and we'll conduct our Q&A session. Thank you again, and we'll see you shortly.
Please take your seats. The program will resume shortly.
All right. Welcome back. I hope everyone enjoyed the break. Just a bit of logistics here: we have two mic runners for Q&A. Please raise your hand, and we'll get a mic to you. Please ask one concise question so we can get to as many of you as possible, and please state your name and your firm for the audience. With that, let's bring up Michael, Jeff, Arthur, and David.
Right there. This one here.
Oh, good.
You flipped it on me.
Go ahead. Yes. Okay, first question.
Out of time.
Let's go with Simon.
Thanks a lot. Simon Leopold with Raymond James. Appreciate you taking the questions here. I'm happy with the EPS growth; 15%+ is a good number. I want to get a better understanding of how we get there, because looking at the last couple of years, you've been getting good EPS growth, but it's by reducing your operating expenses. That doesn't seem like the most sustainable strategy going forward. Within that EPS outlook, how are we getting there in terms of margins? What are you assuming on buybacks? What are the key inputs relative to what you've done over the past couple of years?
Yeah, I can start. First off, the anchor tenet of the EPS target is obviously the revenue growth framework. The commitment in that 7%-9% range is to drive durable revenue growth. You start there. You build on that with the consistency of our cash generation and our capital allocation framework. I mentioned earlier the 100% execution of that over the last number of years. We're committing as part of that to continue to drive 80%+ of adjusted free cash flow back to our shareholders. That consistency through that model obviously aids the EPS growth tremendously. Like I mentioned earlier, there are two other operational levers. As we look at gross profit, we'll continue to add gross profit dollars, with a couple of areas of rate opportunity: Dell IP storage, like we mentioned, and enterprise AI as we see enterprises drive that adoption. We'll continue to focus there.
OpEx will continue to scale down as a percentage of revenue. I think that's the beauty of the 15%+: there are multiple elements to it, which makes us really feel confident in terms of what that looks like.
Samik.
Thank you. Hi, Samik from JPMorgan. Maybe just on AI: you highlighted the differentiation you have both in terms of technical expertise and in getting to market quickly. How should we think about this: you're aligned with the merchant GPU market, the NVIDIAs and AMDs of the world, but there's a parallel market in doing custom builds for some of the hyperscalers. Given the technical expertise that you have, how do you think about the opportunity in that part of the market? What would it need from an R&D perspective for you to participate?
Make sure I understood the first part with regard to.
Addressing the hyperscalers.
Pardon?
Addressing the hyperscaler custom ASIC market relative to the merchant GPU market itself, how do you think about that opportunity? Maybe just to get a bit further along on the AI path, like you've talked about more mid-single-digit margins on your AI servers. As you get to the end of the long-term framework, how should we think about the margins there? Is it really a function of the mix between enterprises versus Tier 2s that drives that change? Thank you.
Yeah, multiple parts. Hyperscalers, we always answer the phone and we will engage. Today, primarily when you look at where we play is that tier two cloud service provider layer, the sovereign layer, and enterprises. Now, do hyperscalers use some of what I just described in fulfilling their infrastructure needs? Without question. Directly with a hyperscaler, to answer your question directly, that has not been an opportunity for us to date. The engineering complexity continues to grow. It goes up. In fact, it goes up pretty significantly if you recall the chart that I showed. I think there will be opportunities in the future. There aren't exactly today. We have done quite well in the tier two cloud service provider sovereigns. As you mentioned, over 3,000 enterprise customers, 6,700 unique customers in the pipeline. The business grows.
AMD, NVIDIA, for that matter, whomever comes up with a part that a customer wants, we have built the custom engineering expertise to deploy that. Big racks, little racks, complex racks, individual nodes that go in existing data centers. I think the engineering capability that we've built across the entire spectrum of this class of deployment, obviously, I think it's unmatched. I think it is differentiated to us in the marketplace and will continue to do so. It is an area back to the OpEx question. We're investing in this. Much of what Arthur described in terms of the cooling is an R&D investment area. These engineering pods, it's been an investment area. We'll continue to invest, and we can operate across that rich mix of customers in the mid-single digits that we've been talking about. We believe that is absolutely where we are. That's where we'll continue to stay.
We have opportunities with enterprise, and there's probably not much more to add to that. Say again? Stays in the same zip code that Mr. Kennedy described. Okay, which gives us, I think, maybe an important part that continues to allow us to build out a portfolio of broad customers.
Let's go with Wamsi.
Thank you, Wamsi Mohan, Bank of America. Thanks for the presentation today. If you look at your long-term framework, it looks like you'd roughly double your ISG revenues looking out five years. In that construct, the overwhelming majority of that is probably going to come from AI servers. Can you help us think through what assumptions are embedded for enterprise in that incremental $50 billion-$60 billion of revenue? The margins you articulated are mid-single digit for AI servers today, but enterprise should push that higher. It'd be helpful to get some color on that $50 billion-$60 billion: how much do you think enterprise could be, especially given you noted some real enterprise AI traction at your customer base? Thank you.
Multiple parts. Let's see. First, maybe let's address the elephant in the room about where we think long-term ISG operating margins are within this framework. You should think of this framework as us operating in the 10%-14% range. That is really a byproduct of the anticipated AI mix that we believe we'll have in this long-term framework. We're very comfortable with our ability to hit that. Every quarter will vary; it's a long-term, annual framework. The one quarter that it's nine point something, no alarm flags should go up. That's just the balance of what the proportions of the businesses are. That 10%-14%, we believe we can operate within.
It is reflective of what we think AI margins will be throughout the period of time and the opportunity we have with tier two cloud service providers and more sovereign in time. That balance of margin, while enterprise margins are better, we continue to see opportunity, as I hope we described, in the build-out of tier two and the sovereign opportunities. That's probably the best way to answer it. You're right. Much of the growth is on the AI side. We have our core ISG business growing at slightly above the marketplace to take share, and the balance of the growth comes from AI.
Let's go over here for a moment. Aaron.
Yeah, thanks. Aaron Rakers at Wells Fargo. I'm going to apologize for this first part, but on the 15% CAGR and the 7%-9%, can you just level set us? Is that fiscal 2026 to fiscal 2030? What's the timeframe being used here? Then my question, shifting over to storage: the storage revenue for you guys has been roughly flat over the past handful of years. One of the things I've heard you continually talk about is the Dell IP mix versus the non-Dell IP mix, if you will. How do we think about that? Where are we today? What are you assuming in the model? When does that start to inflect so we actually see some growth in storage? And where do we stand on Project Lightning? I'll end there. Thanks.
I will take the first piece of that question. You should consider FY 2026 as the baseline for the model. The last guidance we gave was on August 28th, and our framework runs through FY 2030. It is the next four-year cycle based off that baseline.
To the revenue question, as Jeff said, in our long-term growth framework, we're looking at share gain. Your question is, hey, trajectory hasn't been there. You know, what's changing? Over the last couple of years, we have worked really hard to do two very specific things. One, simplify the portfolio on the things that matter most to customers. You'll hear me talk over and over again around enabling private cloud infrastructure, enabling AI, enabling cyber resilience. We have done a much better job of focusing our R&D on those activities that matter to customers. Second is the transformation that we're running within the development community. In my prepared remarks, I talked about the fact that our mix of software developers that are focused on product delivery is up 18%.
I talked about the fact that the entire software development apparatus is now united under one agile model with a common CI/CD motion. This allows us to work a lot more effectively together, delivering features for customers. If you take a look at what we're doing on private cloud with PowerStore, when are we going to see growth? We've seen it for the last six quarters, the last five of which it's grown double digits. All-flash, when are we going to see that? We've seen it over the last several quarters. In Q2, we grew 25%, 25.7% to be exact, and gained 244 basis points of share. In the areas that matter, we are starting to see growth.
Project Lightning, by the way, now that I'm thinking about it, I did not cover this in my prepared remarks, so I'm going to get demerits for that later. Project Lightning will be the fastest parallel file system in the market, with twice the throughput of our nearest competitor and 67% greater access. When you think about the tier zero sort of application that a lot of the tier twos or the upper echelon of the enterprise are looking for, this is going to be perfect for them. Then you have the Dell Data Lakehouse, which is again the ultimate ingest engine for pipeline orchestration across all storage protocols with seamless lifecycle management. We augment all of this with the Dell Automation Platform. We have really focused and streamlined the storage portfolio on the things that matter.
We've revamped the development engine to deliver innovation to customers faster. That's why we have confidence and we've seen a couple of quarters of good progress.
Aaron, maybe to add color to that is we have the VxRail HCI headwind that I know that's what you're referring to. We'll continue to work with our customers. We're providing our Dell private cloud off-ramp for that with a disaggregated architecture that Arthur just talked about with our Dell Automation Platform that brings that manageability, simplification, ease of deployment to building private clouds with our traditional storage. We believe that continues to play out through calendar 2026 or fiscal 2027. As we get towards the end of fiscal 2027, you should begin to see the inversion, which I think is your question you're getting to, that the Dell IP portfolio can shine. That's what we're moving towards. Correct me on Lightning, in customers' hands by the end of the year?
Beta in the second half, in the customer's hand, GA at the beginning of next year. Yes, fiscal year.
Okay, Erik.
Hey guys, Erik Woodring, Morgan Stanley. Thanks for having us back again. I'd love for Arthur to dig into the AI infrastructure opportunity outside of AI servers. We've talked about the attach opportunity a number of times, and we're two years beyond the 2023 Analyst Day. I'd just love your updated view on where there is opportunity for attach in storage and services in AI infrastructure, where there isn't, and what that all means when we think about the broader opportunity behind what has clearly been, and will remain, a massive growth driver for you guys.
Yeah, so good to see you. Thanks for the question, Erik. You know, we've been talking about being in the early innings for a while, and I think Michael said it in his prepared remarks, and Jeff hinted at it as well. The opportunity for the enterprise is significant. What we've learned over the course of the last two years is that deploying AI is not just a shift in technology, it's a shift in culture, it's a shift in mindset. If you think about what Jeff drove within Dell Technologies, it was all around bringing our processes together, streamlining, standardizing, and automating those processes to really get the value of the solution. Obviously, data is the fuel that feeds AI, and if your data is siloed, is dark, is not observable, you're not going to put into the engine the fuel that it needs to run.
Enterprise customers are running their POCs to really focus on compute, playing out, hey, what does this efficiency gain actually look like? As they move into production, as I said, there will be an incredible opportunity to attach the networking and the storage as a full Dell Technologies AI Factory. You think about what does a future data center look like? It is one where the silos are broken, the data is connected, everything is flowing seamlessly. Today, 50% of a typical enterprise's data is dark. That means they don't know what it is, right? There's another 30% that sits in cold storage in archive and backup. You can envision a world where all of that becomes observable, active, and sitting in hot and warm tiers constantly in circulation, feeding AI engines.
I think as more and more customers move out of POC into production, you'll see a greater opportunity for attach. I want to make clear that an enterprise doesn't wake up and say, I'm going to go buy an AI Factory, I'm going to go deploy it, I'm going to see results like this, right? There is work that they have to do inside of their enterprise in order to prepare to make that infrastructure useful. That effort is a lot more than people thought two years ago.
Let's go with Ben.
Thanks. How are you guys doing? Ben Reitzes from Melius Research. I wanted to ask just in general, Michael, now that you reflect and you have this structure of an AI business that's really catapulting the enterprise business and you're in PCs and you've benefited from scale. Now in the AI revolution, can you just talk about the benefits of keeping it all together, having PCs? Are you seeing the benefits of scale? Is there anything you think you could do with the portfolio long term as you look at this transition? Does it make sense to be in PCs still or any other changes that you might want to predict in the future just as you look at your portfolio in terms of the structure of the company?
Yeah, I think when we look at our business over time, it's clear that over the last decade, we have benefited from having everything all in one place with a large number of enterprise and commercial customers. I think the strength of our supply chain, the relationships with the component suppliers, all of that has benefited because of the scale. We don't see any change to that. As far as the portfolio goes, you know, as David mentioned, you may see smaller tuck-in kinds of things, but we don't see any massive opportunities outside of that. It continues to be a benefit to us to be able to provide customers a complete set of solutions across client, server, storage, and increasingly the networking that goes into the data center, from the top of rack on down. Of course, the services and financing go all around that.
Let's go with Amit.
Perfect. Thanks a lot. Amit Daryanani of Evercore. Michael and Jeff, you've both talked a fair bit about deploying AI internally. It's been a key lever for you guys to get all this OpEx savings and headcount reduction. Can you talk about how far along you are in that journey? How much more do you have to go? Is there an end state that you envision? Maybe if I just extend this a little further: hopefully you'll do better than the 7%-9% growth you've outlined, at least from the AI data points. We'll say it's going to be better. At the same time, you want to cut a lot of OpEx out of the model, a lot of employees out of the model.
How do you ensure the guardrails are in place that things don't fall off the rail and you don't have operational execution issues? It seems like you're growing very fast and you're cutting OpEx at the same time, which is impressive and unique, but also scary to an extent.
Maybe I'll start, and then you can build on top of that.
Sure. First of all, we should start with the premise that not all OpEx is created equal. The OpEx in the four distinct categories that differentiate us, our go-to-market engine, our end-to-end solutions, our supply chain, and our services, are areas we've invested in. As we have discussed reducing our expense, and you can see it in the numbers, we've actually invested in more coverage. We've invested in engineering. We've invested in more platforming, if you will, to build a substrate across the company. We have put the efficiency challenges on the support functions across the company. We've been able to achieve that with tremendous productivity and efficiency, starting with simplify, standardize, automate. Once you finish that, you add intelligence, AI. Maybe to bridge to the other part of your question: we've done that reasonably well, and we haven't even touched agentic yet.
Now, do we have coding agents? Yes, we have some of those real-life implementations today. The broad notion of an autonomous agent working across the Dell substrate to do work that should be done by machines, not people, we are in the very, very early innings of that. We're never going to be done. That's not how we run the place. It's continuous improvement. What's next? Pleased, but never satisfied. We think there's opportunity. At the same time, you'll see us invest. It's in the framework that David described: operating leverage, with OpEx as a percentage of revenue continuing to go down. We will invest where we need to in the business as appropriate. I think I can say I'm not concerned about that. We've been long-term operators for a long time. We know how our model works.
We know what it takes to run our at-scale supply chain, to be able to service in 170 countries, to be able to have a go-to-market engine of tens of thousands of sellers. What I'm working on is how do I have less support of every seller so we can actually have everybody in sales be a seller? What we want is everybody in engineering to do engineering, not support work. Those are the opportunities and maybe the nuances that we don't describe enough that ultimately the number of engineers actually doing development is growing. The people supporting them is shrinking because we can be more efficient. The number of sellers is growing, but the number of people that support selling is becoming more efficient. Does that help? It's driving that level of rigor and discipline in the system.
For the foreseeable future, I see our ability to ultimately shift the OpEx profile. We will make a pretty dramatic shift over this course from what we used to spend in G&A to what we will spend in G&A in the future. You'll see our R&D expense continue to become more productive.
We have a well-established set of KPIs and metrics that help us understand what kind of resource level we need inside the business. I think the key point is that our people are becoming much more productive and capable, and we're able to scale our OpEx. As you think about the revenue growth that we're adding, if our people have better tools, we don't necessarily need to add people in order to achieve that revenue growth. We can make our people more productive and more efficient.
If an engineer can write 40% more code faster and is higher quality, that's more output.
I think there is a part of this where we have completely reimagined a number of these activities inside the business. That is a more complicated kind of multi-year effort that Jeff has been leading. We've made a lot of progress there. That involves getting all the data together, thinking deeply about the processes, and not being stuck in sort of, well, we used to do it this way five years ago or 10 years ago, so we're just going to make a better version of that. No, it's like, what can it look like given all these tools?
Let's go with Mark.
Hi, Mark Newman from Bernstein. Thanks for taking the question, and great presentations today. Michael, going back to earlier in the presentation, you talked about the huge opportunity ahead in AI, and I think there's no doubt the opportunity ahead is huge. Taking a step back a bit, a lot of investors continue to worry about short-term digestion concerns. I just wondered what the executive team at Dell Technologies looks at for signposts that the short term is also still strong. Signposts that can give investors more confidence that there are no short-term digestion concerns, at least in the near-term horizon, to dispel some of those fears. Related to that, the traditional enterprise market is, as you said, really the bread and butter of Dell Technologies. It's great to see 3,000 enterprises engaged with Dell Technologies for AI servers.
Any kind of analysis you've done on what percentage of AI workloads in the future will be on-prem versus in the cloud to give us some kind of understanding of where this market is going longer term. Thanks very much.
Arthur, you want to address the first part of that?
I will. You know, you think about how much we shipped in Q2, and it really didn't even dent our next five-quarter pipeline. The level of engagements that we have with customers continues to go up and go up significantly when we take a look at the next five-quarter pipeline. I spend a fair amount of time with some of the largest customers as well as with some of these enterprise customers. They have very ambitious goals, and they have very, very ambitious timelines. If you look at how fast we are being asked to design, deliver, integrate, and bring up massive clusters, it's something that if you would have asked me three years ago, can you do this? I would have said not on God's green earth. That's how fast things are moving. We have a really good pulse.
As you can imagine, we're involved in every single large deal that you can think of, and ones that you can't think of. We have a pretty good idea of the pulse of the demand that we see over the next, call it, 12-18 months. There is no slowdown that I can see.
That was the first part of the question.
Yeah, I think in general, you know, in the infrastructure space, there have been periods where there's been kind of a digestion cycle. If we look at what's going on with the incredible growth in tokens, we don't see any signs of that. As Arthur said, the requests and the demand signal that we're seeing across a wide range of customers suggests that there's still a lot more demand than supply, and it doesn't seem to be slipping. Now, you know, will there be digestion periods in the future? I'm sure there will be, right? We just.
Yeah, no, that's where I was going to go. I mean, you asked for signs. During digestion periods, what happens? Orders are canceled, projects are delayed, spending slows down. None of those three exist today in this space. In fact, it's just the opposite: we need more, faster. Our anticipated computational needs over the next handful of quarters, Arthur and I look at them and go, oh my gosh, we have to get a supply chain ready to meet that. We do not see the signs that have historically been there. When the economy changes, you see PCs, you see CSG stop buying, you know the pattern. After a buildup of infrastructure for three years, you see a pattern. Those patterns and their signals are nonexistent today. As Michael said, that doesn't mean they won't come, but they don't exist today.
Yeah, I think it's important just to add on to that. There is significant demand, but that demand is not always linear. I mean, we've been saying this for two and a half years, and I want to emphasize the point: while we work on these deals and designs, there's a lot having to do with technology readiness and factory readiness. It's not as if the demand is linear and you can see growth quarter-over-quarter. As we look at the next five-quarter pipeline, and at how serious customers are about their aspirations, I think there's a lot of opportunity over the next 12-18 months.
The numbers say it, right? Last year we sold $11 billion worth of AI servers and shipped roughly $10 billion. In the first half of this year, I think we've sold $17.7 billion and shipped $10 billion.
On the last part of your question, the enterprise versus cloud mix, I don't think anyone knows the answer to this, but there are a couple of things we're seeing. First, the buyers are more experienced, having gone through the original cloud activity. Everybody loves the public cloud, right? Until they get the bill. They've got the bill, and they've understood that it works for some workloads, but not all workloads. The bigger, more sophisticated ones certainly like the arbitrage of multi-cloud, on-prem, and colo. That's where we're seeing a lot of activity with these 3,000 AI factories. The forward trajectory there looks quite promising, but I don't think anybody knows how much the whole thing is going to grow. Certainly, we're in a great position to be able to capture a large portion of it.
Maybe a couple of reinforcing factors. One is we think it's hybrid. This is no doubt a hybrid world. The data is on-prem. The data is being created at the edge. We see a continuum. There'll be hyperscaler public cloud AI. It'll be on-prem in a data center. It'll be out at the edge in a factory. It'll be out at the far edge on a PC. Which is why, back to the question that Ben asked, you look at the continuum of AI, it's making its way to the PC. From the large-scale clusters out to the edge, that continuum of AI we believe exists. You'll see AI, or you'll see it hybrid. Quite frankly, we think AI follows the data. Where the data is created is the most efficient place to do the computation.
Yeah, can I add on to that?
Sure.
You know, another point I would want to make there is that we talk a lot about modernizing data centers. What I think about a lot is that traditionally, data centers, and not to offend any CIO, were largely thought of as a cost center, right? You paid to keep the lights on. When you think about something as a cost center, what do you think about? You think about how am I going to optimize this cost. When you think about data centers, or all infrastructure, in the future, it's now a value center, because it's housing your most valuable asset. It's housing your data. It's housing your AI. Customers are starting to think a lot differently about what their data centers need to look like. They're starting to shift from, hey, I used to think about this as a cost center.
Now I need to be thinking about this as a value center. When you start thinking about it as a value center, now you're not thinking about how to optimize it. You're thinking about how do I invest in it because this is actually driving real value for the organization.
Maybe to wrap just one more piece of it, just from a financial perspective. From my perspective, you mentioned some of those gating factors. I expect this business to be margin dollar accretive, right? Jeff mentioned earlier, you know, mid-single digits in terms of operating income. That's non-negotiable. That's our commitment as part of the framework. We'll have control points kind of tied to that. We're creating value across the enterprise.
Let's go up front with Asiya.
Thank you, Asiya from Citi. Just a question for David. You know, investors often ask about free cash flow. There's this view that the growth of AI is heavy on free cash flow generation just because of net working capital. You clearly talked about committing to your free cash flow conversion today. Maybe help us understand how you're managing that, because the view among investors is that AI business growth is net working capital intense.
Yeah, for sure. Like we said, cash flow has been a true differentiator for us for, you know, the 40 years of the company. Look, as you stare at AI growth, you take it in the piece parts, right? Jeff mentioned earlier our CSG business, truly capital efficient as we drive through that. I would probably layer on top of that our core server business as part of that as well. Obviously, storage and AI are less capital efficient, but are still efficient in total. We also have the opportunity, because of the strength of our balance sheet, to be able to, you know, invest from time to time in the inventory required for AI. When you look over the lifetime of the framework, that's why we felt comfortable with the net income to adjusted free cash flow conversion of 100% or better.
We see it continuing all the way through the life of the framework to be that differentiator for us.
Okay. Let's go in the middle here in the back for Mike.
Thank you, Mike Ng from Goldman Sachs. I wanted to ask about Dell Technologies' competitive advantage in AI, which I know you commented on earlier. You know, relative to other OEMs, are there one or two things that consistently stand out in the pursuit of AI cloud deals? Given the potential for kind of U.S. public sector workloads in AI, does being a U.S. company help in some sort of way as well? Thank you.
I'll take the first part of that. I hope I communicated. I'll give it a go again. What stands out is our engineering expertise, the time to design, the rapid scale deployment to get that gear once the design is done, on the dock at the customer. When it shows up, it works. It's in the data center. It works 99%+ of the time. We cover that with installation and support services, to deploy it, to install it, and to keep it up and running. That consistently has differentiated us in the marketplace against all OEMs and ODMs in the marketplace. That's what we'll continue to invest in. That continues to be something our customers come time and time back to us for. Mind you, we're never the lowest price guy. We're driving differentiation with those items that I described.
You add financing for those customers that need the financing help or support. That comprehensive portfolio or package of capabilities is what has consistently differentiated us from the very first day. Arthur and the team have done nothing except double down and put more capability in place.
Yeah, and that's consistent with what we've heard from the customers. On the U.S. government question, we don't expect any orders today because of the government shutdown, at least as of when we started the meeting. You know, the gap between private sector AI capabilities and what you find in the national labs, the Department of Defense, and other intelligence services has never been greater. This has been recognized, and there are a number of efforts underway to close that gap. Certainly, you know, we have had
a great relationship with the government as a customer over many, many decades, and we're in a great position based on the opportunities that we're engaged in and hearing about.
Let's go up front again with Tim.
Thank you. Tim Long at Barclays. Just wanted to touch on AI servers, maybe a one and a one-B. Really interesting data on the efficiency of the new 16G and 17G servers and how old the installed base is. Historically this vertical has seen a few really good years, a few really bad years, and kind of flattened out. Do you think there's anything about the refresh opportunity ahead, with this dynamic and maybe AI, and the need to upgrade more than in prior cycles? What's the tail of that dynamic? And really quickly, you mentioned, I thought it was very interesting, that AI enterprise servers would be new budget from enterprises. Just curious if there are any proof points for that, because generally if enterprises spend X on servers, they're going to spend X on servers and change the mix.
Any color on how, you know, do you think AI would be incremental to that bucket? Thank you.
Sure. Arthur hits this in his presentation, and I know he'll add to this: there's a consolidation opportunity. There is a large number of old servers deployed. Customers are looking for space and power for AI, and you have an opportunity to create that floor space, power, and cooling capacity by consolidating old servers, at roughly 3-to-1 to 5-to-1 conversion rates on 16G and 5-to-1 to 7-to-1 on 17G, taking advantage of more cores, more power-efficient cores, and bigger memory arrays to consolidate and free up space, power, and cooling for other stuff. We've been seeing that. That continues. We believe that's the opportunity. We're seeing IT budgets shift to AI. And, as Arthur put it correctly, this is not an IT project.
Here's how AI is getting deployed at scale by successful companies: it's not an IT project, it's a business imperative. When it's an IT project, it struggles. If it's a business imperative, if I could give you 40% productivity, wouldn't you give me a couple of dollars to go get it? That's how CEOs and boardrooms are talking about AI, as disruptive and a game changer, and why, if you can give proof points and there's a return on investment, you can get incremental dollars to provide that.
You answered it, Jeff.
Yeah, on the power point, I would say, you know, if you think about certain markets, like in Europe, where the cost of power has gone up dramatically, the savings are tremendous. That has fueled a faster-than-normal refresh, and I would say faster-than-normal growth rates in Europe, because the ROI in moving to 17G with the lower power consumption is dramatic. Think about countries where the cost of power has gone up significantly.
Let's go over here again, David.
Great, thanks. David Vogt at UBS. I just wanted to go back to the sovereign and enterprise opportunity. When you talk to the 3,000 customers that have deployed AI Factory and potential new customers, what are the governors or what are the gating factors that are holding them back? Meaning, why can't they move faster? What is the issue? Is it data storage? Is it complexity, compliance? I'll give the second question at the same time. When you think about, in the past, there's been opportunity to be competitive on big deals, signature deals. I think Arthur mentioned that you're going to be involved in pretty much any deal that comes to market.
Is there anything in that enterprise vertical or sovereign vertical that jumps out at you as these are landmark transactions, landmark deals that really maybe change the competitive dynamic or competitive intensity of those types of potential deals that you may target going forward? Thanks.
Maybe to answer your question, I'm not worried about the 3,000 customers. They've already started. I worry about the tens of thousands of other customers that haven't started. Those 3,000 are doing their pilots, they're moving into production, they're starting with a use case, and they add another use case. They see return. That momentum is going. We're seeing repeat buyers. We're seeing the enterprise portion of our AI business grow. We've talked about that. It's how do we get the others moving? It really is this notion of: where do I start? Where's my darn data? Can I do anything with my data? How do I pick a use case? Ultimately, can I get a return on the use case that I pick? Our professional services, along with our partners' professional services, are helping customers work their way through that.
That's the biggest inertia to plow through, because once you have found a use case and you get whatever productivity you charted out to get, you're a believer quickly. This notion that I described earlier, where AI is being driven by the business, works at a much greater rate than if it's an IT project: hey, we have AI, it's out there, help yourself, pick a model. Or it's being driven by the R&D leader. I met with a large manufacturing company two weeks ago, their entire engineering leadership. We talked about coding assistants, how they could use them, and the results that we've seen inside our company. It's, holy smokes, I've got to get me some of that. How'd you do it? We started with this developer pool, we grew that developer pool, we built momentum, and then kept going from there.
It's teaching, and it's getting that practice out, that we find is the biggest opportunity. It's why our services and our partners' services organizations are helping customers navigate that. That, in my mind, is the enterprise story, and then we talked about it: once you deploy a node with a GPU, or four GPUs, or eight GPUs, and you begin to see it, you can start talking about networking. You can talk about storage. For example, our next-best-action use case started with storage all over the place. It now runs on our PowerScale and ObjectScale unstructured assets. We're getting incredible performance. It's driving productivity for our field service organization. That's a real-life example, and we are teaching customers about that.
That's where the opportunity is; back to Erik's question about the drag around enterprise, once you get a proof point, you begin to hook up data, networking, and services, and that's where you get more revenue around each and every AI server. Does that help? On competitive intensity: this is kind of a big category. Everybody shows up, particularly in the big deals, OEMs and ODMs. In enterprise specifically, the relationships that we have built over many, many years serve us well. We serve small business, medium business, public institutions, large corporations, and multinationals with a very large sales force and partner network. I think that reach is absolutely an advantage, along with the service capability that goes with it and a reputation for putting high-quality gear into their infrastructure.
Yeah, as you heard about our market-leading positions, we're generally the incumbent in enterprise and commercial. I think the other point is that there is a varying sense of urgency across companies. We've approached this with a high sense of urgency. It's not there across all industries, but as the use cases become more apparent, and as it becomes clear what can be done in R&D, in sales, in customer service, in key functions, then it becomes driven from the business-line executives, from the CEO, from the board: what are you doing to take advantage of this great capability?
Okay, let's go up front here. I'm sorry, I can't see with the lights. We're next to Simon.
Yes, thank you. It's Mehdi Hosseini, Susquehanna International. You talked about the ISG operating margin target of 10%-14%. What's the underlying assumption for the storage mix? As a follow-up to it, this event is mostly focused on AI compute accelerators. Jeff, you talked about architectural changes that need to happen in storage, but I haven't really heard much detail about your storage; you briefly talked about the portfolio. I'm asking this because if storage is going to be a key mix behind the ISG operating margin target of 10%-14%, and if data is going to be the key to what enables inferencing, I don't hear a strategy. How are you going to go and procure the key components? Everybody's focused on compute accelerators. Hard disk drive vendors are talking about being sold out for two years, with two years of backlog.
I don't hear you talking about securing components. As you know, these are semiconductor manufacturing processes with long, long lead times, and they're coming out of two or three years of a deep recession. I don't hear them rushing to add clean-room capacity. What are you going to do about it, if storage is going to be a key component of your ISG business?
I think that's three or four questions. Let me work my way through.
They're all related.
We occasionally buy some ingredients for storage. I'm pretty sure.
Yeah.
First, in the guidance of 10%-14% operating margins, the implied storage growth is slightly ahead of the storage market, taking share. The market is growing low to mid-single digits, depending on which storage category, and you should expect us to outperform that in revenue growth. What we've communicated consistently is that as our portfolio shifts to more Dell IP, the margin profile of the storage business improves. We expect that and model that over this timeframe. Correct me if I go astray. I think that's the first part of your question. The second part of your question was around storage strategy; perhaps we weren't clear enough. We are the storage leader in the market. Period. Larger than number.
Two and three combined.
Two and three combined. We have a large footprint across block assets, file assets, unstructured assets, and data protection assets. If you step back, the storage strategy is really around three pillars. The first pillar is helping customers build private clouds. Think of that as traditional storage: taking our block assets, where we have a leadership position, and helping customers easily implement that. Arthur and I both made references to the Dell Automation Platform that's tied to the Dell Private Cloud, which lets us take the ease of use we created in HCI and bring it to life for customers deploying large-scale traditional storage. The second component of our storage strategy is Dell AI Storage. We try to describe that through our unstructured assets, because a lot of this data is unstructured, 80% of it, growing 55% annually.
All of this data is coming at us. We take our at-scale PowerScale and ObjectScale assets, which are known for their performance, flexibility, and manageability in the marketplace. That's the foundation. We add Project Lightning, which is our parallel file system. We add Project Dynamo, which is the KV cache that improves inferencing performance. We put the Dell Data Lakehouse around that, which helps ingest data into AI systems. We package that up as the AI answer for storage. The third component of our storage strategy is data protection and cyber resiliency. So, three pillars of our storage strategy: a more traditional approach, made much easier, helping customers build private cloud on-prem; an AI component with the assets I just described; and third, cyber resilience and data protection. We are a leader across the board there.
We're investing in the R&D. Many of the coding assistants and knowledge assistants that I talked about continue to help accelerate the delivery of features and capability. On the last part of your question, I think our supply chain does okay. We know how to operate when there's lots of supply, when there's no supply, and everything in between. We have long, long relationships with the disk drive manufacturers; I met with the CEO of one of them just last week, talking about what's happening. I think we're in pretty good shape when it comes to DRAM, NAND, and spindles.
To that question, it sort of relates to one of the earlier questions that was asked about the portfolio. Having a number one PC business in revenue, number one in servers, and number one in storage, obviously our consumption of NAND, DRAM, and disk drive specifically is among the largest in the world. That scale plus the long-term relationships, we feel very comfortable in our ability to secure the supply we need.
Okay, let's do one more, and then we'll ask Michael Dell to close with Vijay.
Thanks. Vijay at Mizuho. Just a quick question going back to the ISG side. I think this fiscal 2025-2026, you guys grew ISG like 25%-30% year-on-year. I know you're guiding to 11%-15% going forward. Are you being conservative there? And as you look out to fiscal 2030, I wonder what the mix of AI servers to storage and networking is. What are you modeling internally, I guess?
Thanks. I'll start, maybe just to reiterate the framework again. I think Jeff touched on it briefly. Within ISG, we expect our core businesses, core servers and storage, to grow in that mid-single-digit range, which will be at or slightly ahead of the market. Do the math within that, and you get to an AI number that's between 20% and 25%. There are lots of different opinions on how big the AI TAM is, right, as you kind of go through that. Arthur mentioned it earlier: we are in all the conversations when these deals pop up across CSPs, sovereign, and the enterprise. If in your scenario analysis it's a bigger market, we're there to play. We're there to add accretive margin dollars, which is accretive to our operating income, which contributes to EPS. We'll be there if it's bigger, but we think we have a solid framework up through FY 2030 right now.
The second part of your question, I don't remember.
What about the mix of AI servers and storage?
He just said it.
Yeah, we can. Sorry.
He addressed it.
Okay, let's close out. Thanks for everyone's questions. Michael, to you.
All right. Thank you all very much for being with us today. If we reflect on the past two years, it's clear the pace of change and innovation has been unprecedented. There's a pretty good chance we could be saying the same thing two years from now as the cycle continues to accelerate. At the core of that, of course, is AI: data and infrastructure are advancing at an exponential rate, and Dell Technologies finds itself at the center of that transformation. We have raised our targets across the board, now targeting 15%+ EPS growth. We're committed to generating substantial free cash flow, as we discussed, with the majority of that to be returned to our shareholders. Our strategy remains the same, and our track record of value creation over the past four decades speaks for itself. Looking ahead, we're even more excited about the opportunities we see.
Thank you again for joining us today.