Okay. All right, good. Sorry about that, everyone. Well, hello and good morning to you all. Welcome to our View From the Top CEO Call Series. I'm honored to welcome back the founder, chairman, and CEO of Dell Technologies, Michael Dell, on this View From the Top Series Call. Michael's joining us again for the third time, so we're especially privileged to have him on this call. Before we get started, I do need to mention that conflict disclosures related to the individual companies or securities discussed on the call today can be found on the call invitation. Additionally, this presentation contains forward-looking statements based on Dell Technologies' current expectations. These statements involve risks and uncertainties that could cause actual results to differ materially. Factors that could cause results to differ are discussed in Dell Technologies' periodic reports on Forms 10-K and 10-Q filed with the SEC.
Any forward-looking statements made today are based on assumptions as of today, and Dell Technologies undertakes no obligation to update them. We have a lot of ground to cover here today. We're going to talk through a broad range of topics, including AI, memory, PCs, capital allocation, lots of exciting stuff to cover here with Michael. As you all know, Michael really needs no introduction, but he founded Dell Technologies with $1,000 in 1984 at the age of 19. He became the youngest CEO ever to earn a ranking on the Fortune 500, has navigated many cycles and lots of portfolio changes, including the acquisition of EMC, the go-private transaction back in 2013, spinning out VMware in 2021, Perot Systems, bought and sold somewhere in there, and now re-architecting the company for the age of AI.
Beyond the success in business, Michael, alongside his wife, Susan, founded the Michael & Susan Dell Foundation in 1999, focused on expanding opportunity through initiatives spanning education, health, and family economic stability. Most recently, I'm sure you've all seen, the foundation announced a $6.25 billion, that's with a B, philanthropic commitment to help seed investment accounts for 25 million U.S. children. That was incredibly generous, Michael, and we hope many others will follow your lead in helping this next generation to success. That's just really incredible. We're really fortunate to have you here. Welcome, Michael. We really appreciate you taking the time to be with us here today.
Well, thank you, Wamsi. Great to be with you. Thank you for the kind introduction. Happy to discuss all this.
Well, thank you so much, Michael. To get started: you have reinvented Dell from PCs and peripherals to servers, storage, and networking, and now AI. You seem pretty all-in on AI now, with revenues growing to a third of the company in just three years. How do you see AI changing the tech landscape, and what do you see as Dell's opportunity here?
Yeah, I think you have to step back and understand that we're shifting from calculating and computing to thinking and intelligence, which is just a fundamental change. The value of that and the opportunity for that is enormous. Certainly, if you look at our AI server business, it went from $2 billion to $10 billion to $25 billion. We're expecting $50 billion this year. We're still in the steep part of the S-curve adoption of the technology. We have 4,000+ customers using these AI factories. Look, if you think about this thinking and intelligence platform, you've got a $114 trillion economy. If you get a little bit of productivity improvement with AI tools, it's worth an enormous amount. While there's a ton of investing going on here, it's at least possible that the world is under-investing in this given its potential.
It turns out that you need a lot of what Dell Technologies has built over 42 years in terms of infrastructure, capability, server, storage, networking, and the support and services, and supply chain, et cetera, to be able to deliver that. I think maybe 10% of customers, maybe 15%, sort of understand what this can do, and the rest of them are still figuring it out. It takes time for all this to happen in the real world. I also think there are distinct phases of it. There's the tools phase, which is what most people are doing. They're adding a tool here and there, and it helps people do their job and be more productive. That's great. There's a whole other phase of this, which is really reimagining the workflows in companies, and this is very, very different from tools, right?
It involves really rethinking how you get to a given outcome. In many cases, this is just a big change inside companies. Again, doesn't happen quickly, not easily, but you go from, let's say, a 10% or 20% or 30% improvement with tools to a 10x or 20x or 30x improvement, and there's certainly signs of hundreds of x improvement in various processes and outcomes where you're just doing it very differently with agents and recursive self-improvement and all of that. Super exciting time, and certainly there's a lot of demand for what we do.
Yeah. No, that's incredible. Many things to touch on over there. Maybe let's start with the comment you made about potentially under-investing. A lot of investors are worried about hyperscaler CapEx budgets that probably exceeded anyone's forecast from two or three years ago. These companies have gone from a capital light to capital heavy model, which is pressuring their own cash flows at the moment. How do you see the sustainability of this CapEx cycle and how do you position Dell if the spending on AI was to either accelerate or decelerate?
Well, it's certainly not decelerating, I can tell you that. We took $64 billion in orders in the past year. The opportunity and the pipeline keep growing. To be clear, we're not putting it on our balance sheet. We're still operating with a capital-light model. When I talk to the hyperscalers, each of them views this as an existential issue for their business, and so they're all investing super aggressively. Again, I go back to the size of the services economy and the demand. What I can tell you is that when we deploy this infrastructure, Time to First Token is incredibly important because they're putting the infrastructure to work immediately, and it gets consumed. There is just a ton of demand here.
I still think we're in the early stages of the adoption of all this, and we haven't even gotten to agents and the physical AI and recursive self-improvement. Certainly there are going to be ups and downs here, but we feel we're well-positioned. We don't actually make any commitments until we have actual orders. I think we've been pretty careful in managing our sort of capital commitments and demand supply to make sure that we're ultimately converting this into free cash flow, which is what we focus on every day.
Yeah. No, that's very clear. In some ways, this unprecedented CapEx and demand for AI data center build-out is creating a lot of tightness across the supply chain, whether it be labor, cooling equipment, or memory, just to mention a few. Where do you see the biggest bottlenecks, and what are some of the opportunities and risks associated with these from Dell's standpoint?
We kind of love it when there's a supply chain challenge because that's kind of like the Super Bowl for us, you know. We're ready for that. You have to understand that while you can easily identify, let's say, 10 or 20 suppliers that work with Dell, there's actually thousands and thousands of them, right. It's a complex thing. We have built a supply chain machine, and it's built on relationships. Obviously, we took our guidance up a lot. We have the supply for our guidance. We're out looking for more supply. If you just get back to what's going on here, there is extraordinary demand growth across the industry, and so you've got all sorts of constraints. One way I would describe the memory issue is, I would say memory and advanced silicon, it's kind of the 25 × 25.
What do I mean by that? When NVIDIA came out with the H100, it had 80 GB of High Bandwidth Memory. The current part has 288 GB. Next year, you'll hear about parts with 1 TB, and the year after that, you'll hear about parts with 2 TB. From 80 GB to 2 TB is 25x more memory per accelerator. Okay? In that timeframe, you'll have about 25 times more accelerators. 25 × 25 is 625x. I know most of you are pretty good at math. It also takes about four years to build a new memory plant, assuming you don't have shells already built. If you wanted to have memory capacity in 2027, you would have made the investments in 2023.
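[Editor's note: as an illustrative aside, not part of the call, the "25 × 25" arithmetic above can be sketched in a few lines. The figures are the round numbers quoted on the call, not vendor specs, and 2 TB is treated as a round 2,000 GB.]

```python
# Back-of-the-envelope sketch of the "25 x 25" memory math from the call.
# All figures are the round numbers quoted on the call, not vendor specs.

hbm_h100_gb = 80      # HBM per accelerator on the H100, as quoted
hbm_future_gb = 2000  # ~2 TB per accelerator, two generations out

# Memory per accelerator grows roughly 25x...
memory_growth = hbm_future_gb / hbm_h100_gb  # 25.0

# ...while the number of accelerators also grows ~25x over the same span.
accelerator_growth = 25

# Total memory demand grows multiplicatively across the two curves.
total_memory_growth = memory_growth * accelerator_growth
print(total_memory_growth)  # 625.0
```

The point of the multiplication is that the two growth curves compound: a 25x per-unit increase times a 25x unit-count increase is a 625x increase in total memory demand, which is what makes the four-year lead time on new memory fabs bite.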
Now, if we dial back the clock to 2023, you might recall it was a horrible year for semiconductors, particularly memory. Micron in particular had a negative gross margin. Their sales fell by half from 2022 to 2023, and the industry collectively lost $40 billion. A lot of them still have PTSD from that. They're like scared little puppies. Some of them almost went bankrupt, and so they're very careful about investing. If you go to the logic side, TSMC didn't increase its CapEx in 2023 from 2022, didn't increase it in 2024. In 2025, they started to increase it, but not enough. They're pretty conservative, and they're sold out. Yeah, there's big supply constraints. The good news is that we are not a monoline company. We can sort of move wafers around across multiple product families. Again, we've had relationships.
I mean, Jeff Clarke's relationships and my relationships with these companies go back literally to the 1980s. It turns out a lot of these companies are quite relationship-oriented, particularly some of the Asian ones. We feel that the environment advantages us. Of course, scale. If you look at our scale, our server and storage business is generally 4x larger than any single competitor. We're larger than number two, number three, and number four combined. We've also gotten really good in terms of our reaction time in dealing with the changes in cost. Customers also know that we have a supply chain that works. Now, nobody likes it when the price goes up, but even worse is if you can't get supply. We feel we're well-positioned for this kind of environment.
No, that's great. Yeah, you guys have demonstrated time and time again, when there is supply chain disruption, you guys just somehow manage to out-navigate everyone else, and that's a testament to the very strong supply chain legacy you have built at the company. I want to come back to memory, Michael, but staying on AI, your AI revenues have just grown extremely fast, right? From almost nothing to $50 billion in just a few years. Most of this is Tier 2 CSPs. When you talk to boards and CEOs, CFOs, how are they framing the ROI for AI? When do you see enterprises start to more broadly leverage AI and hit an inflection point? Maybe tied into that, how does this change the margin and ROIC profile for Dell as you think about those moving pieces?
Well, I would say we're seeing it, Wamsi. I mean, there's substantial uptake in enterprise, and it's continuing to grow. Certainly, the margin profiles will be more attractive than with the CSPs, because they're smaller deals. It tends to be more services and storage and networking. The lowest-cost token is the one that's generated closest to where the data is. Enterprises have figured out that it's a hybrid world, and these tokens are pretty expensive. Again, we haven't even gotten to the whole agent activity, but inference is definitely taking off, and a lot of the use cases are not super complicated, and a lot of the smaller models or open models work super effectively. You probably know that the open models are not that far behind, maybe six months behind, depending on who you ask. We see robust growth in the enterprise.
Look, I think companies are going through an understanding here where, I would say there's sort of different kinds of companies, right? There's companies that have said, "Wow, this stuff is total game changer, and we better do this, or we're going to have a big problem if our competitors do it, and we don't." Then there's other companies or organizations that say, "Well, okay, this is our budget, and it doesn't really matter what's happening in the outside world. We're sticking with our budget." Okay? I mean, that'll kind of work until it doesn't work, but I think over time, there's going to be fewer of those if you really believe that this is a game changer. Put me down for that. We see it in our own organization. We see it in our own productivity.
If you look at our own ROI, if you want to measure free cash flow per person or revenue per person or gross margin per person or however you want to measure it, there's a ton of ROI here. We're, I believe, still at the beginning of this. Also, you need to think about how, in the past, each of us processed data and tasks, we had projects and emails and things, and we sort of did those at human speed and passed them off to the next person. Organizations, and the technology inside organizations, are a function of what was available at the time they were established, okay? They get updated from time to time. Now fast-forward to, let's say, 2027. We have these agents that are able to process all these tasks, and they work way faster than humans.
They're way more accurate. They never sleep. You can supervise tens or hundreds of those agents and fundamentally change the way work is done. There's just a whole rethinking and reimagining going on inside companies. That will happen at very different speeds, right? Not every organization is going to be AI-pilled and just throw out everything and redo their company immediately, kind of like we are. I think you'll see more and more of that over time. You'll see a stark contrast in the performance of companies based on the rate and pace that they do this. We're kind of already seeing that.
Yeah. No, it seems like the deluge of agent usage is just starting now. We're starting to explore what these agents could do, and especially after OpenAI and other recent developments, there's obviously a lot more focus on it.
Yeah. That's right.
How do you think?
We've gone from LLMs to reasoning to agents, and now we have this recursive self-improvement. If you think it's going to end there, you would be sadly mistaken. It's going to keep going. A way to think about it is we have this platform for thinking and intelligence, and you're going to see a significant number of innovations on top of that. Can somebody predict exactly what those are going to be? Not a chance. Hold on, it's going to be exciting.
Yeah. No, that sounds very much like when we first entered the internet or cloud or mobile eras. With each of these, we did not foresee the whole ecosystems that would form on top of them, which kicked off many multimillion-dollar businesses. Seems like there is a lot of innovation yet in store. You guys are obviously going to be part of that. Maybe, Michael, to switch gears a little bit, unfortunately, the world is a little bit in a tumultuous place with the Middle East turmoil. Do you see any sovereign build-outs that are potentially slowing? Or do you think that now, with each country looking to secure its own infrastructure, there's more of an urgency to become self-reliant, and the demand is actually becoming even stronger from a sovereign perspective?
Well, geopolitical tension is sort of the root cause for sovereign AI demand. Let's suppose there's a scenario where there's tension in the transatlantic partnership. Who knows? Maybe that would be a thing. Well, all these countries in Europe, they don't want to be reliant on U.S. AI or U.S. anything, actually. Now we remind them that, well, actually, we can't make these things without ASML, and there's all these companies in Japan that make all sorts of gases and chemicals, and South Korea and Taiwan, and it's not just the U.S. Dear customers, we're dependent on the world to make these things. It's not just the U.S. involved, but we have extensive factories in Europe. Yeah, I think sovereign demand continues to grow.
We have some really good partnerships, like with Palantir, and customers are looking for the ability to run AI inside their country with their own private data for all sorts of applications. We've had some great wins there. Think of it as: in any country in the top 25 by GDP, there's some sovereign thing going on, whether it's the government itself, or a telco, or a separate company that is somehow affiliated or connected back to the government. There's definitely demand for sovereign AI.
Okay. No, that makes a lot of sense. Maybe, Michael, just looking at your AI server business, historically Dell has had a fairly significant negative cash conversion cycle, and you kind of engineered that whole concept across the whole PC industry to start with. But as AI becomes such a sizable part, almost as large as the PC business, probably going to exceed it very soon here, how do you think about whether or not it's important to have this negative cash conversion cycle across your company? Or does it matter in your long-term calculus on how you manage this business, given the different capital and working capital requirements associated with it?
It absolutely matters, and we're super focused on cash flow. Last quarter, our cash conversion cycle was flat sequentially. We're taking all the best practices that we have and applying that to the AI business, and we're, as I said, pretty careful. We don't buy material till we have a PO from customers. There's some CapEx on our part, but it's pretty limited. We feel very good about our ability to generate strong cash in this environment.
Okay. Amazing. One of the questions we often get asked is just around the competitive differentiation in the AI market. You guys have really taken this to a new level. I mean, you're beyond what anyone could have expected. The amount of revenue acceleration that you've shown, your order book, you've just had very strong performance. What is it that's driving that differentiation? I know you and Jeff sometimes talk about Level 11/12 and deployment in data centers. You just mentioned Time to First Token. It just seems as though you're doing something different. There is something that sets you apart from a lot of your competitors, who are going through several issues of their own, not just execution issues, but other issues beyond that.
As you think about, A, where your differentiation is, and B, what are some of the opportunities given maybe some of the missteps and other issues that are there with other companies? How do you think about that?
Yeah, I would say, we keep getting repeat orders from the same customers, and we keep winning over the customers that, let's say, were earlier reluctant or had decided to go with somebody else. There could be many reasons for that. I don't think it's just one thing. Certainly, we would start with our engineering. We actually do it, and lots of it. It's not just in the compute space. Obviously, storage is a big element of what we do, and we now have our Lightning File System. Our PowerScale and ObjectScale are doing super well. Building all these systems to be reliable, the networking, then you get into the deployment installation. We learned a long time ago that if you just ship these things to the customer, bad things happen. We show up with an army of people and deploy them and install them.
By the way, we get paid for that. Then you have service and support, incredibly important, and these things are deployed all sorts of places around the world, wherever there's power. We have a broad ecosystem of partners. Obviously, our supply chain, DFS comes into play. Look, we've been first to market now with the GB200, with the GB300, and I think the proof is in our consistent execution and customers coming back for more. Jeff Clarke and I at GTC, we had a dinner with a bunch of the neo clouds and CSPs. Maybe all of them actually were there. They said it was the first time they were all at the same place at one time. The demand from those customers is very strong, and they're happy. They're happy with what they're getting.
I spoke with one of the largest ones this morning, and their business is very strong. Look, they know they can rely on us. I mentioned Time to First Token. We build these things ahead of time. These things are unbelievably complicated. We've sort of perfected the precision logistics and supply chain engineering, where we can deliver hundreds of these racks in a given week like clockwork and have them show up, and within 24 or 36 hours, they're up and running, and they're generating money for the customer. Our competitors don't seem to be able to do that reliably. I would also tell you that Jensen, at his various performances, he does a great job on stage, and things come up and it's like bloop, bloop. Here come all the servers, and everything looks fantastic. It actually doesn't quite work that easily.
They have these things called reference designs. The reference designs, I'll let you in on a little secret, they don't actually work. There are a lot of bugs in them. We find all the bugs, and some of the bugs only they can fix, and we tell them about those because only they can fix them. The rest of the bugs, the ones we can fix ourselves without them, we don't tell them about. I think that's why we keep winning, because we're building a more reliable product at the end of the day, and that comes down to engineering and discipline inside our whole engineering organization. I think it's very different from the others.
Look, we do what we say we're going to do. We don't overpromise. We say we're going to deliver it, we deliver it, and it works, and it's reliable, and you can count on it. Apparently, the other guys, not so much.
Yeah, no, that's a super interesting point, Michael. When you think about maybe an analogy to industry-standard servers, where the level of complexity is definitely lower and the level of differentiation might be lower, the ODMs actually have meaningful share, at least at the hyperscalers, for industry-standard servers. It sounds like the differentiation and the engineering could be so different here with AI servers that maybe the ODMs don't have the same kind of foothold. The Hon Hais and Quantas of the world perhaps don't actually take as much share of the AI server market, potentially. Would that be something that you foresee as likely in the future?
Well, I would describe it slightly differently, Wamsi. What I would say is that, if you're a hyperscaler, you can afford to build a massive engineering organization to go and do some version of the kind of work that we're doing. Those companies do work directly with the ODMs. I don't think there are a whole lot of companies that can do that. Certainly it would not include the neo clouds. You've also got hundreds of these cloud-native businesses now that are consuming enormous numbers of tokens. No enterprise customer's ever going to try to do that. You're talking about an enormous engineering effort to do that. We still have some business with the hyperscalers, although that's not our main priority.
Yeah. No, that makes sense. From a share perspective, right, you clearly have been growing very, very fast in AI servers. How do you think about sort of a natural state of share for Dell in AI servers? There's been some disruption definitely at some of the other larger peers, so to speak, who kind of struggled with various issues beyond execution. In times past, has Dell been able to capture incremental share because of those sort of worries and not just sort of on a temporary basis, but more structurally over time?
Yeah. We've gotten some phone calls recently, panic phone calls from some customers. I think in the enterprise, we're certainly advantaged there, and we just don't see the other competitors as much there. I don't really know on the share. We're often asking the question, how high is up in terms of the size of the opportunity? Because we just keep seeing incredible waves of demand, and the lead times keep getting longer, and the orders keep coming in. We're basically seeing some super robust forecasts for future demand and responding accordingly. In core servers and storage, as I said earlier, we are quite a bit larger than others, and we seem to be growing.
Yeah. No, for sure. Maybe going back to.
Again, I think we're still in the early stages of agents and multi-agent systems and recursive self-improvement that will drive demand even further.
Yeah. No, that makes a lot of sense. Going back maybe, Michael, to your point earlier about the near-death experience these memory companies had a couple of years ago, and now looking at where we are with memory price inflation, it sounds like not much new capacity is going to come online. I think overnight, Samsung reported some preliminary numbers, extremely strong. Memory price increases in both DRAM and NAND are expected for Q1 and Q2. I think people were very surprised to see your fiscal year guidance, because people felt like with this memory price escalation, there was no way that anyone could navigate that and not have a down year-on-year for earnings. You had committed to a long-term framework of 15%+ EPS growth, and frankly, you've driven higher than that now.
We're talking about 25% earnings growth in this fiscal year, which is just astounding. Maybe you can help explain to investors the secret sauce behind your ability here to overcome this intense margin pressure that the rest of the industry is seeing. I'm sure you're seeing increased costs as well, but you're somehow able to navigate it much better than others. What's actually happening behind the covers? What are some of the levers that you are using, and maybe across your business lines, that is helping more than offset the challenges of memory price increase?
Well, it shouldn't be super surprising that we're raising prices to protect gross margins. We're very agile in doing that, so we're able to execute that very quickly. We kind of saw what was happening, and we made appropriate changes. The other thing is, going back to what I said earlier, we're in a good spot in terms of supply. Everyone is subject to the price increases. Again, let's say you're at Bank of America or whatever bank you want to pick, or whatever customer, it would be bad if the price of memory went up, but it'd be worse if you couldn't get any, right?
Sure.
We've translated that situation into a successful outcome. I'll say there's probably some tradecraft in there that I'm not going to explain, but we're doing very well in this environment.
Michael, you mentioned price increases, and obviously, some of this is pretty visible externally. Industry-standard pricing has gone up 30%-100% on some SKUs. There have been significant price increases across the board. When you think about the impact of these price increases on customers, what sort of buying behavior do you anticipate? Do we anticipate a pull forward of demand, or is the demand so strong and the value proposition of replacing a server so great that we're going to see continued demand? Or is it that, given your secret sauce and tradecraft, you're able to take share, so even though there might be a pull forward in the first half, your share gains in the second half will let you do better than the rest of the industry?
Can you frame how to think about how customer demand might be changing, and how the seasonality of that could be changing, because of what's happening with memory prices and your price increases?
Yeah, I am sure there's some pull forward, but we don't know how much. We also know that there are some customers who will say, "Oh, the price has gone up too fast. I think I'm going to wait for it to come down." Right? Who knows? Maybe it will come down. I'm sure at some point it will, but it doesn't look like that's happening anytime soon. The problem, of course, is you can delay the purchases for a while, and there is some demand destruction, certainly in low-end phones and low-end PCs, but about a third, roughly 500 million, of the 1.5 billion PCs in the world are four years old or older. Okay? If you work at a company, let's say, with knowledge workers, and now you've got a four-year-old PC, this is a bad situation, right?
You're paying somebody $100,000 or whatever to do their job, and they've got this decrepit tool that doesn't work very well, and they know it. Yeah, maybe you don't like that the price went up, but eventually, you're just going to pay the price. You can delay, you can defer. It's almost a question of when they're going to buy, not if. Then with servers, the majority of our installed base is still 14G or older. We're on 17G now, so that's a seven-to-one consolidation benefit. I think customers are starting to figure out that your budget for tokens has to go up. This just costs money. Obviously, we've put all this into our guidance with respect to the second half. Yeah.
I do think these are the kind of environments where we tend to gain share. I also think different from the kind of 2020, 2021 cycle, you've also seen us scale our cost structure in some pretty extraordinary ways. We're kind of fit to fight here for the future.
Yeah. No, great points. Michael, you mentioned, obviously, that some of your relationships with these memory companies go back four decades. The hyperscalers are starting to try to have long-term agreements with a lot of these parts of the supply chain, whether it be components, memory, or optical. They're investing aggressively into these companies in some ways. You've obviously done a phenomenal job. You're saying that, hey, you've got enough supply to cover your guidance, and you're trying to get more supply. As you think about your discussions with these suppliers, is that changing in any way because of what these hyperscalers are trying to do, which is just really go in and try to lock up as much supply as possible?
What I would say is we have had long-term agreements with a lot of these companies going back decades, and we continue to. Obviously, the nature of those adjusts given the environment and the demand from the overall industry. Also, I think you have to understand, this whole thinking and intelligence platform just uses memory in a very different way. That's a big change here. It's a lot less expensive to take a token that's already been generated, store it, and pull it out of memory than to regenerate it. You end up with a much faster-growing, sort of pyramidal system of memory, from SRAM inside the logic to High Bandwidth Memory to DRAM to SSDs to, ultimately, rotating storage, which is still around. Yeah. We have agreements with them. The demand is extraordinary.
We have the supply for the demand or for the guidance that we've provided for the year, and we're working on getting more.
Generally, from your perspective, the flavor of these long-term agreements you've had is that there is a supply guarantee for Dell, and then pricing is more variable in nature. Is that the right way to understand most of these longer-term agreements, or would you characterize that as not right?
Yeah, that's generally right. I think also, while some of them can be very opportunistic in periods like this, they're also quite concerned with the long-term health of their business and who the off-takers are over time. What do I mean by that? These guys are investing tens of billions of dollars in these plants, and they want to know how they're going to sell their parts in the next quarter or two, sure. They really want to know how they're going to sell them in the next three, five, seven, 10 years. We've always been a predictable, reliable, let's say low volatility customer, right? We don't show up when there's a disaster and say, "Oh, we've got to have a whole bunch of memory." We're there all the time, and we've been there all the time. That I think is super helpful.
Also, just the breadth of our demand signal and the quality of our demand signal is appreciated. These guys have been whipsawed in the past by the hyperscalers, where they overbuild, and then the hyperscalers turn everything off and stop buying. That does not happen with our demand. Our demand is a lot more stable, and that's appreciated, and that's part of the relationship. Also, we've treated them like long-term partners, and they've generally reciprocated.
Yeah. No, that's a great point. Time's flying. We only have five more minutes here, and there's so much more to talk about. Maybe quickly here, I do want to touch upon what Dell is doing internally with AI. You mentioned agentic AI, you mentioned productivity, different metrics, whether you look at it per employee, or on a profit or revenue basis. What are you doing with agentic AI within the organization, and where are you in that deployment cycle?
I would say we're rapidly hurtling toward it. If you look at our modernization, we've been on this theme for several years. You've been hearing us talk about it. Our OpEx has come down four years in a row, which is pretty unusual. This kind of started going in 2023, and we think there's still more to go, and significant scaling benefits. We've guided OpEx to a single-digit percent of revenue. That hasn't happened in 20 years, and of course, we have an even more R&D-intensive business than we did 20 years ago. It's 100% structural. There's tons of opportunity, and we're doing this while we're investing in our R&D, in our sales capacity, in our support, and in our supply chain.
I would say I'm super bullish on our ability to generate productivity, and ultimately, that makes us a much more competitive company, opens up paths to profitable growth, kind of as you've been seeing. Stay tuned.
Yeah. Quite amazing. Adding $30 billion more on the top line with very little incremental OpEx, so quite amazing. Maybe in the few minutes that we have left: on a prior call with me, I think, you said you are the ultimate long-term investor. How do you define success for Dell in the next 10 years?
I would say, over the last five years, we outperformed every component of the Mag 7 with the exception of NVIDIA, and in aggregate, we outperformed the Mag 7 including NVIDIA. Certainly, we look at our relative performance to the S&P IT index, and we see tremendous opportunities to continue to grow and generate strong cash flow. You've seen our capital return policy has been super shareholder friendly. We bought back 54 million shares of our stock. We increased the dividend by 20%, and I think you'll see us continue to generate strong revenue, earnings growth, and free cash flow, and the business should be a whole lot more valuable. Certainly, we would endeavor to outperform all the relevant benchmarks by a lot.
Yeah. You are, and continue to be, a technologist at heart, and you have taken this company from basically you starting it to this amazing scale of now $140 billion a year in revenue with very fast earnings growth. I can see you're very excited about the future of Dell. Any final thoughts for investors on why they should be just as excited as you are?
Yeah. In the last five years, we doubled EPS, roughly 15% annual EPS growth, and we've talked about that as our long-term framework for the next five years. ISG is super well-positioned. Obviously, you know about our guidance for this year. Our portfolio's never been in better shape. Our operating model is super strong. We've got deep relationships with customers and the supply chain, and we have a ton of levers in this business to drive EPS, cash flow, and shareholder return. I feel very good about the future opportunities for our company.
Amazing. Well, Michael, we really appreciate your time. This has indeed been a great discussion. Always love your insights, love the work you're doing for our country, for the future. Just incredible always to speak with you, and it's been an honor and a pleasure. Look forward to catching up with you again, and thank you so much for all your time and your insights.
Thank you, Wamsi. Thanks, everyone, for joining us today. Take care. Bye-bye.
Thank you, everyone.