Dell Technologies Inc. (DELL)

Status Update

Jul 17, 2023

Operator

Good day, and welcome to the Dell Technologies Ask the Experts Anything call. Today's conference is being recorded. At this time, I would like to turn the conference over to Paul Frantz. Please go ahead, sir.

Paul Frantz
VP of Investor Relations, Dell Technologies

Hello, everyone, welcome to today's Ask the Experts Technology Q&A. We wanted to spend the next 45 minutes outside our typical financial cadence, focused purely on technology, the major technology trends we're following, our strategy, and how we're innovating to support our customers. We plan for this to be the first of several conversations with our Dell leaders, so stay tuned for future sessions. Today with me are Jeff Clarke, our Vice Chairman and Co-Chief Operating Officer, John Roese, our Global Chief Technology Officer, and Jeff Boudreau, President of our Infrastructure Solutions Group. I'll start by sharing our safe harbor statement. Dell Technologies statements related to future results and events are forward-looking statements based on the company's current expectations. Actual results and events could differ materially due to a number of risks and uncertainties, including those disclosed in our SEC filings.

Dell Technologies assumes no obligation to update its forward-looking statements. Before we go to Q&A, I'll turn it over to Jeff to share a few thoughts.

Jeff Clarke
Vice Chairman and Co-COO, Dell Technologies

Thank you, Paul. Happy to be here to talk about technology with you all. We can talk about a number of technology trends reshaping the landscape: data continues to explode and will define our world; multi-cloud by design, picking the right cloud for workloads, cost optimization, complexity reduction, and operational ease; Zero Trust, the needed security shift that puts the good guys back in control; compute and storage resources moving closer to where the data is created; and the modernization, virtualization, and containerization of the telecom stack. I suspect most of our time will be spent talking about the most talked about technology today, generative AI, and rightly so. Gen AI is an inflection, it's disruptive, and it's game-changing. What an exciting opportunity! Let's widen the aperture for a moment. There are several types of AI, including machine learning, deep learning, computer vision, and generative AI.

All are growing with new and exciting use cases like computer vision for manufacturing lines, machine learning for supply chain, and chatbots that enable greater customer satisfaction. Today's $30 billion AI TAM is primarily driven by the first three. We view Gen AI as a new category of computing in the technology stack with very distinct characteristics. Gen AI doesn't replace anything, and it grows the IT TAM over time. It expands the use of computational machines to open up new ways of automation. The compute will be fed by massive, unstructured and object storage infrastructure, driven by more and more data. The models created by Gen AI will need to be protected as some of the most valuable data ever created. Generative AI, powered by LLMs, large language models, has dominated the AI narrative. These are amazing models, generalized to answer any question.

They are trained over very expansive datasets. The dataset can be of one form, like text, or of many forms: text, pictures, audio, source code, et cetera. The utility of these models with hundreds of billions of parameters is wide and general. They require massive scale and immense computational density to train. From our customer discussions over the last six months, four generative AI use cases have consistently risen to the top of CEOs' lists: customer operations, content creation and management, software development, and sales. What we are seeing is customers wanting to use their data, processes, and business context to train the model. As a result, we see generative AI having two distinct variants: the first being massive, multifunctional, generalized models, GPT-4, Bard, ChatGPT as examples, and the second being domain-specific models like Galactica, PubMedGPT, or Dramatron, or enterprise-specific models like Stable Diffusion, BloombergGPT, and CodeGen.

What's the difference, you might ask? First, the core neural network-based language model is essentially the same. The difference lies in the number of parameters and the number of expert systems focused on unique functionality like common sense reasoning, translation, pattern recognition, reading comprehension, and code completion. Massive, multifunctional, generalized models have many more parameters and expert systems. They are trained on broad and large volumes of data. Domain- and enterprise-specific Gen AI use smaller proprietary datasets with fewer parameters and expert systems. These models don't require massive scale or cost, and in most cases, they can be trained up with a small cluster of servers with GPUs and perform inference on the edge with a single server.

For example, an enterprise-specific model built with open source Falcon with 7 billion parameters on your data can be trained with four Dell PowerEdge XE9680s, and once the model is trained, inference can be performed on a single Dell PowerEdge R760xa. To use Jensen's characterization, we will have AI factories at the edge, in the data center, and in the cloud, in other words, everywhere, resulting in a range of Gen AI infrastructure solutions. Dell is uniquely positioned with our broad portfolio and services to win in this new category of computing by helping customers size, characterize, and build the Gen AI solution that meets their performance, cost, and security requirements. We are very early in this cycle, not dissimilar to 24 years ago with server virtualization or as far back as 40 years ago with the introduction of the PC.
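As a rough illustration of the sizing math behind statements like these, a common back-of-envelope rule is about 2 bytes per parameter for FP16 inference weights and on the order of 16 to 20 bytes per parameter during training with an Adam-style optimizer (weights, gradients, and optimizer states). These byte-per-parameter figures are general industry rules of thumb, not Dell sizing guidance:

```python
def model_memory_gb(params_billions, bytes_per_param):
    """Back-of-envelope memory estimate: parameter count times bytes per parameter."""
    return params_billions * 1e9 * bytes_per_param / 1024**3

# Falcon-7B-class model (7 billion parameters), illustrative rules of thumb:
inference_fp16 = model_memory_gb(7, 2)    # ~2 bytes/param for FP16 weights only
training_adam = model_memory_gb(7, 18)    # ~16-20 bytes/param incl. Adam states

print(f"Inference (FP16 weights): ~{inference_fp16:.0f} GB")
print(f"Training (Adam, mixed precision): ~{training_adam:.0f} GB")
```

The order-of-magnitude gap between the two estimates is consistent with the point above: training wants a multi-GPU cluster, while inference for the same model can fit on a single accelerated server.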

Disruptive, game-changing, more to learn, more to come. With that, we'll take your questions.

Paul Frantz
VP of Investor Relations, Dell Technologies

Thanks, Jeff. We'll ask each participant to ask one question to allow us to get to as many people as possible. Let's go with the first question.

Operator

Thank you. If you'd like to ask a question, please signal by pressing star one on your telephone keypad. If you're using a speakerphone, please make sure your mute function is turned off to allow your signal to reach our equipment. Our first question is going to come from Wamsi Mohan. Please go ahead.

Wamsi Mohan
Senior Equity Research Analyst, Bank of America

Thank you so much. Thanks for doing this. I guess the question I have is, when you think about the technology differences from Gen AI, and sounded like Jeff, you're saying the TAM is all incremental, can you talk about how you would characterize a server configuration for Gen AI versus a typical industry standard server that you sell, both in technology terms and in ASP terms? I'd also be curious if you think about how the evolution of the interconnect is going to be, for now, seems like InfiniBand is fairly dominant. Do you have a view on whether that stays dominant or if Ethernet starts to get incremental traction as well? Thank you.

Jeff Clarke
Vice Chairman and Co-COO, Dell Technologies

Thanks, Wamsi. Let me take a run at it. John and Jeff can certainly fill in any holes. First of all, we believe the TAM is expansive. This is game-changing and disruptive. We just think it's an opportunity with what this enables. I called it an inflection point in my talking points. This is really a fundamental change, from how we develop products to how work is done. With that, we just believe this is an incredible opportunity for companies to reinvent themselves. When I think about specifically your question, what does the average configuration look like? There's actually a correlation of model size to how much memory you need. Bigger models, more memory. We've modeled that, characterized it, and we have a sizing capability to help our customers through this.

If you're moving from a 7 billion parameter model to a 20 billion parameter model, you need almost twice as much memory to be able to run the model effectively. Then you look at the ability to run the actual computational side. The configuration today with our 9680, with its eight H100s clustered together in groupings of eight, allows you to extend the ASP of our products quite substantially. We think AI drives richer configurations, by the way, all the way down to the PC. Doing more with your PC drives a richer configuration. We don't see any scenario where running these algorithms, training these algorithms, using your data to build your models, and then ultimately doing inference wherever you deploy it doesn't drive richer configurations and ultimately richer PCs.

In terms of ASPs, they're significantly greater than the average server. Just the content of memory and the content of the compute intensity drives that. Jeff, John, anything you would add?

John Roese
Global CTO, Dell Technologies

No, Jeff, I think you characterized the TAM; it's just broad in general. I can take a stab at interconnect. You know, we like to talk about compute, and we definitely like to talk about storage as components. Obviously, the network broadly in the AI world also gets disrupted to some degree. When you think about a training infrastructure, it's not a singular node, it's a cluster of nodes. We're seeing both InfiniBand and Ethernet in data centers. That's an ongoing case of two technologies competing with each other healthily; we're seeing them both advance, and we play both sides of that equation. The reality is we will continue to see demand.

As computing performance goes up per node, the IO path to and from those computing nodes within the cluster will go up. Another dimension of the interconnect, though, is the storage interconnect, because it's not just the compute nodes in the neural net talking to each other; it's how fast you can feed the data into those environments. This is an area where high-performance storage architectures, which we happen to be quite good at, are important. Having gigabit links from the storage environment into the training cluster is as important as the training cluster itself being able to shuffle data between nodes.
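The point about feeding the cluster can be made concrete with a toy calculation. All numbers here are invented for illustration; they are not cluster or product specs:

```python
def required_feed_gbps(samples_per_sec, avg_sample_mb):
    """Bandwidth the storage tier must sustain so the training cluster
    never starves: samples/sec times sample size, converted to gigabits/sec."""
    return samples_per_sec * avg_sample_mb * 8 / 1000  # MB -> megabits -> gigabits

# Hypothetical cluster ingesting 2,000 image samples/sec at ~1.5 MB each:
print(f"~{required_feed_gbps(2000, 1.5):.0f} Gb/s sustained from storage to cluster")
```

Scale either the ingestion rate or the sample size up and the storage-side pipe quickly becomes as much of a design constraint as the node-to-node fabric.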

Then kind of a third dimension of interconnect is the IO path into the inferencing infrastructure, which, you know, has profound impacts, because if we're talking about processing imagery or media, the IO path from the sensor, a camera or otherwise, across a 5G network or across an SD-WAN or even within an enterprise environment, also sees essentially higher utilization. In general, storage, compute, and networking all work in concert to build out these next-gen AI architectures. From our perspective, there isn't necessarily a particular winner between Ethernet and InfiniBand. There's going to be competition there. On the storage side, Fibre Channel still plays a role along with InfiniBand and Ethernet connectivity. Again, what matters is how performant your storage systems are, so they can actually fill those pipes.

Thirdly, broadly in inferencing, you know, it's going to require, you know, relatively high performance from any place where data is created to the place of inference, which means that things like our edge servers, if you didn't notice, are, you know, generally coming with a default, higher performance NIC, higher performance Ethernet interfaces. The reason for that is to be able to basically digest a stream of data that has to be inferenced at the edge. Yeah, interconnect's a big deal. It's, you know, probably secondary to compute and storage, but one of the three pillars.

Jeff Boudreau
President of Infrastructure Solutions Group, Dell Technologies

Thank you so much.

Jeff Clarke
Vice Chairman and Co-COO, Dell Technologies

Thanks, Wamsi.

Operator

Our next question will come from Simon Leopold from Raymond James. Please go ahead.

Simon Leopold
Managing Director of Data Infrastructure/Semiconductors, Raymond James

Great. Thanks for doing the call, thanks for taking the question. I guess I want to maybe get your perspective on how you expect the market to evolve. In that, I'm imagining that much of the spending is biased towards hyperscalers building infrastructure and training AI clusters today, and that over time, we see essentially the uptake by enterprises, your customers, to do more inferencing and to leverage their own data to utilize the systems being built by hyperscalers. I guess I want to understand whether you think that's a rational way to look at the market or you believe the enterprises will build their own clusters. That's part one. Part two is really if we're right about enterprise adoption coming later, how do you envision that evolution or timeline of enterprise spending on AI implementation? Thank you.

Jeff Clarke
Vice Chairman and Co-COO, Dell Technologies

Let me see if I can unpack that, Simon. There's a bit there. I'll take a swing at it and see if I can answer your question specifically. For me, the big step back is this notion of inflection that I used earlier and how game-changing this is, how it's really going to impact every sector and every type of function in every sector. Because of that, we're seeing CEOs of every size company have discussions about how to take advantage of Gen AI. The reasons for that, I think, are reasonably evident, but let's all be on the same page. The first is it's going to change the way our work is done. Two, it's going to change the way we build products. Three, it's going to change the way we service customers.

Probably four, and I think it's obvious, but I think it's worth saying: there's a productivity and efficiency element to this, and it's why I believe it's fundamentally additive to our industry. I've been at this a long time, as you know. How many times has the world dropped a 10%, 15%, 20% productivity improvement upon us? To my knowledge, it hasn't happened in my working lifetime. That's what's here in front of us, and it's why it's the topic of every CEO and every board. Because of that, companies are trying to understand how to deploy the technology, how to better understand it, how to use it in their business context, how to apply it to their business models. I don't think that can be done in the broad, general, foundational models.

Is there a lot of math? Great work to be done. Is there a lot of applicability? Without question, and we'll continue to see that. When you start talking about your product development community, how you address your specific customers, how to use your marketing data to serve your customers better, how to use your telemetry data to serve them and build a better service model, we believe that type of information is proprietary, unique to the business. As a result, we see enterprises building out that capability. Today, we have lots of pilots underway with medium-sized companies, large companies, and enterprise-scale companies. We see that continuing. What they're asking from us is to help them size it, characterize it, get their data prepared, and help them operationalize and build these AI factories that we refer to. That's the opportunity.

How it's staging today: look, we're seeing a lot of work in the hyperscalers. The massive scale of the models that I described requires that sort of scale, but that's not where we see the vast majority of the models being built and deployed. John, Jeff, anything you would add to that?

Jeff Boudreau
President of Infrastructure Solutions Group, Dell Technologies

For me, I guess, other things that we've talked about in the past, and what our customers are telling us: I would say there are concerns around data privacy and data security that become barriers to everything being in a hyperscaler cloud over time, and why they need to bring that on-prem or into their environment. Jeff talked about the different use cases and the attributes and resources that you need to serve things like LLMs. A lot of those are in the hyperscalers now because they need massive scale. As you lean into some of these domain-specific areas and inferencing, you need a lot fewer resources to do that. Customers are looking at it because of the data privacy, the security.

You know, Jeff and I have talked with a lot of you before about how physics, latency, and bandwidth matter, and the need for real-time and near-real-time speed is critical, which actually leans into having infrastructure on-prem for the enterprise use cases. That's a big thing that we hear from our customers as well as we go forward.

Jeff Clarke
Vice Chairman and Co-COO, Dell Technologies

Well, I think the other thing, and John, you're a resident expert in this, and Simon, maybe this helps build the case: we see the algorithms evolving with smaller datasets that are really honed for specific enterprises and what we call domain- and process-specific knowledge. The fact is you're seeing decentralized AI occurring, where there's a place where the model is trained, a place where the model is tuned, and a place where inference is done. Back in the day, that was all the same system. We're seeing decentralized architectures accelerating. John, if I misspoke, you should correct me.

John Roese
Global CTO, Dell Technologies

No, I think it's important. You know, we're kind of in this moment in time where we see Gen AI as if it is the first time we've played with AI. Just to remind everybody, we were about to enter what many of us called wave three of AI algorithms. The first generation were basic machine learning algorithms. Then we moved into the neural network and deep neural network environments. What happened between wave one and wave two is things got bigger, meaning we saw this trend where you needed a lot of compute capacity to build out a DNN. That was nice, but then the industry started to shift.

We realized, hey, to make these commercially viable, we created technologies like transfer learning, which allowed you to take an existing model and only retrain one or two layers of it in the neural net. We did things like federated learning. That allowed us to distribute the learning process across all the way out to edge nodes. Anyway, all that was the narrative, by the way, the day before the Gen AI world took off. It was basically a shift to rules, engines, transfer learning, all kinds of things that would make the system more efficient and easier to deploy over a larger set of topologies, meaning small entities, et cetera, even out to a PC.
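The transfer-learning idea John describes can be sketched in a few lines: freeze an existing model's layers and retrain only the last one or two. The dict-based "model" and the dummy update rule below are purely illustrative, not any real framework's API:

```python
# Each layer is a dict of weights plus a trainable flag. In a real framework
# (e.g. PyTorch's requires_grad), freezing works the same way conceptually.
model = [
    {"name": "embed",  "weights": [0.1, 0.2], "trainable": False},  # frozen
    {"name": "block1", "weights": [0.3, 0.4], "trainable": False},  # frozen
    {"name": "head",   "weights": [0.5, 0.6], "trainable": True},   # retrained
]

def train_step(layers, lr=0.01):
    """Apply a (dummy) gradient update, but only to layers marked trainable."""
    for layer in layers:
        if layer["trainable"]:
            # Stand-in for a real gradient computed on the new task's data.
            layer["weights"] = [w - lr * 1.0 for w in layer["weights"]]

train_step(model)
print(model[0]["weights"])  # frozen layer: unchanged
print(model[2]["weights"])  # head: updated
```

Because only the head's parameters move, the compute and data needed to adapt the model to a new task are a small fraction of what full training required, which is the commercial-viability point being made here.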

Gen AI popped up, interestingly enough, you know, based on the nature of the first generations of it, things like GPT-3, GPT-4, ChatGPT, Bard, et cetera, you know, the problems they were going after were generalized applications. They're very large scale, and quite frankly, you know, everybody kind of assumed that AI, for all eternity, was only large things, even though the day before it was trending towards distribution and smaller. Our prediction is that now that we have a baseline of the large-scale Gen AI models, which are very good for public generalized services, now we want to take them to the enterprise for more specialized, narrow scope, distributed environments. We'll just begin that journey again.

We'll start to figure out ways to make them more efficient, which is code for reduce the burden on infrastructure, reduce the power consumption, reduce the amount of data necessary, but still make them valuable. We go through this periodically in the AI journey of, you know, jumping to a new order of magnitude and then figuring out how to optimize it so that we can run that thing, not only in the privileged space of a couple of infrastructures, but anywhere the customer wants it.

That is likely going to be one of the big trends that we see as the market evolves, not just who adopts it, but the composition of these systems inevitably will start to become more efficient, and that's code for being able to run it in more places and more diversity, which basically will catalyze the industry, in our view.

Jeff Clarke
Vice Chairman and Co-COO, Dell Technologies

Hope that helps, Simon.

Simon Leopold
Managing Director of Data Infrastructure/Semiconductors, Raymond James

That was great. Thank you very much.

Jeff Clarke
Vice Chairman and Co-COO, Dell Technologies

You're welcome.

Operator

Our next question is going to come from Erik Woodring from Morgan Stanley. Please go ahead.

Erik Woodring
Managing Director and Head of US Technology Hardware Equity Research, Morgan Stanley

Hey, guys. Thanks for hosting this session and for taking my question. You know, Jeff, I just maybe want to take a comment you made earlier and maybe expand on it. That was, you know, you obviously, or a lot of people have been spending time on the ISG side about this Gen AI opportunity. I'd love if you could maybe tease out some of your comments on the PC side and the opportunity for CSG to benefit. You know, will AI at the edge kind of catalyze PC refreshes? How do we think about the timing of that? What's the incremental kind of componentry you might need to add to your products to handle these AI workloads?

Just maybe a couple incremental details that would help us understand the opportunity for Gen AI, specifically in PCs. Thanks.

Jeff Clarke
Vice Chairman and Co-COO, Dell Technologies

Sure, I'd be happy to, Erik. Look, anytime we can get a new technology that drives productivity into the best general-purpose productivity device on the planet, we're better off. When we look at what Microsoft's plans are with AI and future versions of Windows, it's doing just that. It's going to make the workforce more productive. Anytime we've seen that with previous versions of Windows, it's driven a substantial refresh cycle. We think that's the opportunity here. What's equally important about this refresh cycle is that you're going to ask your PC to do more: some form of assistant, some form of, let's say, language modeling or language processing. It's going to do some machine learning with the capabilities that we'll put in our PCs in the future, and that's going to drive a higher ASP.

You're going to need a more capable PC to ask it to do more. Again, that's good for business. We've seen ASPs increase over the past three years. The likelihood that continues as we ask more of the PC is high. You have the subcategory in PCs, one of my personal favorites, workstations: engineers, developers, creators, designers, and data scientists working at the edge, using those high-performance PCs with GPUs in them to do more complex AI tasks. You'll see next-generation PCs with NPUs in them, neural processing units, and those are going to allow us to do some of the basic neural processing that John referenced earlier that'll be in every PC going forward. It's quite an efficient way to do it, optimized for cost and power. We're pretty excited about the new PCs that we'll be building.

That's on top of the embedded AI services and capabilities we've already put into our service stack and our software stack in our PCs today. Today, we do a lot of work around helping customers optimize performance by workload, through the telemetry systems that our service organization has tied into our PCs. We'll be able to extend that customer experience more broadly. I know it's a long-winded answer, but I hope that gave you some context of what we're thinking about, what we think the opportunity is, and how we extend more AI to the edge. John, anything on this from a core technology point of view on the PC?

John Roese
Global CTO, Dell Technologies

You know, the only thing I'd add is, first of all, there are three different reasons why you need AI in a PC, and you're probably familiar with them. One, obviously, is you need acceleration in a PC for AI. If you're developing models, having a good high-performance workstation with a GPU is pretty helpful. We know that with our Precision line; we've been doing that for a long time. The second, though, is the proliferation of copilots. Individually, each copilot you run probably doesn't need a massive accelerator, but if you think about the not-too-distant future, you're not running one copilot.

You'll have a copilot doing transcription, a copilot doing translation, a copilot creating automated imagery, a copilot filling in the gaps in what you're talking about with contextual information. As you add more and more, quote-unquote, "AI workloads" to the system, the idea of having a portion of your semiconductor allocated to do that in an optimal fashion makes a lot of sense, and that's why we see this inevitable trend toward more and more of that functionality within the CPU or in other types of accelerators. That kind of brings us to the third area, which is really the user experience in general; it gets transformed as we do this. We expect just more immersive user experiences on the PC because there are more parties involved in it. These copilots actually manifest.

They show up in better imagery. You know, it's a pretty profound impact on the PC over the long term, because it's the interface the customer has, and it's also the place where you localize many of these experiences around copilots. The combination tells us we're going to need richer user experiences, spatial representation, just greater depth of field to be able to present the kind of data that we're after. We're going to need both direct processing and copilot processing, if you will, as the number of AIs working on your behalf around you increases.

You know, we don't know exactly what they're all going to be, but we know there are going to be many of them, and we know that the platform they're likely going to run on will be a distributed architecture, with the PC as the personal representation for the user. It's a journey, but, you know, we don't see any other path other than more and more processing on the PC, more of it dedicated to AI-type tasks, and a richer user experience, and you can imagine all of those things are pretty good for us.

Jeff Clarke
Vice Chairman and Co-COO, Dell Technologies

Thanks for the question, Erik. I hope that helped.

Operator

Our next question comes from Asiya Merchant from Citi.

Asiya Merchant
Director and Equity Research Analyst, Citi

Hi, good afternoon. Thank you for taking the question, and thank you for the session. One of the questions that we get from clients, as spending is being allocated toward AI, is: do you see that as temporarily taking funding away from a refresh, whether that's on PCs, or further dampening enterprise spending on servers and storage after a very strong calendar 2022? Of course, we have the macro pressures, et cetera. People are just trying to gauge if the spending on AI is taking away from other spending that could have happened, at least here in the second half of calendar 2023, on mainstream servers and storage, and definitely on the PC ahead of a Windows refresh. Thank you.

Jeff Clarke
Vice Chairman and Co-COO, Dell Technologies

I'd probably look at it through a lens that's a little longer term. I don't think this is at the expense of one or the other. This is additive. Again, if I think about what we're hearing in our discussions with customers, the four use cases that I described, and how CEOs are viewing this in the boardroom, this is really a discussion where generative AI can change the basis of competition. It changes the way we're going to develop products and serve customers. It drives the significant productivity increase that I mentioned earlier, but to restate it, because I think it's worth restating: how many times have we, in this working generation, had a 15%, 20% productivity improvement? I can't think of one. This is additive.

CEOs and boards are looking at this opportunity not as coming at the expense of something else, but as the real productivity improvement that is out there. How can I not do this? If I don't do this, I could be left behind, and if I'm left behind, I may not catch up. This truly changes, I think, the basis of competition for many companies. It's going to disrupt cost structures. It's going to disrupt, again, how you serve your customer in a more intimate way. If you can figure out how to get ahead of your competitor or competitors in any given sector, it's a huge advantage. Our experience with our customers, talking to CEOs, and the market research we've done suggests they're not thinking, "Oh, if I'm going to add AI, I'm going to not do this other project.

I'm going to actually extend PC lives by six months." They're going, "I have to invest." This is that game-changing. John uses the word "inflection" inside our company. This really changes the way technology is going to be used, and I think he's right. It's certainly in the discussions we're having, and it's how, as leaders in our company, we're thinking about it: How do we take advantage of coding assistance, where organizations are looking at 20%, 30% productivity improvements depending on the complexity of the code? Then think about how much work is actually language-based. Depending on whose research you look at, roughly 60% of all work is language-based, and 60%-ish of that could be addressed with generative AI technologies.
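Taking Jeff's rough percentages at face value (his estimates, not measured figures), the implied addressable share is simply the product of the two:

```python
language_based = 0.60  # share of all work that is language-based (Jeff's estimate)
addressable = 0.60     # share of that addressable by Gen AI (Jeff's estimate)

print(f"~{language_based * addressable:.0%} of all work potentially addressable")
```

So the claim amounts to roughly a third of all work being in scope, which is what makes the productivity framing credible to him.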

When you think of it in that perspective, it's game-changing. As a result, I know leaders like myself are going, "We need to invest. We have to stay competitive. This changes the way we're going to deliver and build products for our customers and serve them." I know that wasn't quite a direct answer, but it's not about the TAM of this quarter and what's going to happen. It's that game-changing. We think of this as an industrial revolution. This is the steam engine, this is the assembly line, this is the internet. This is what the PC was 40 years ago and what it did to productivity. It's all happening, and happening at a much, much faster rate.

John Roese
Global CTO, Dell Technologies

Thanks, Asiya.

Asiya Merchant
Director and Equity Research Analyst, Citi

Thank you.

Jeff Clarke
Vice Chairman and Co-COO, Dell Technologies

Of course.

Operator

Our next question is going to come from David Vogt from UBS. Please go ahead.

David Vogt
Equity Research Analyst of US IT Hardware and Networking, UBS

Great. Thanks, guys. Great, guys, thanks again for doing this. This is helpful. I want to pull back and maybe spend some time on storage. I know you talked about richer configurations on compute and ultimately richer configurations, you know, at the edge of the network, particularly at the CSG side. What, you know, when I think about generative AI and the other flavors of AI that you touched on earlier, what does this mean for storage demand? Particularly, are we going to see new sort of demand for whether it's, you know, more basic levels of storage that's software-defined, that's a little bit cheaper to deploy? Are we going to see more, you know, all-flash systems deployed?

How do we think about the data that's going to be developed and used for inference as whether it's hot data, cold data? Just trying to think about all the different permutations of how this could play out, you know, as the enterprise starts to spend more aggressively on storage. Ultimately, can this, from your perspective, be delivered as a service from the storage side? Is that part of the thinking going forward from Dell's perspective? Thanks.

Jeff Clarke
Vice Chairman and Co-COO, Dell Technologies

Thanks for the question, David. We'll try to unpack it a little bit. I'm going to ask my resident storage expert to come in and help me in a minute. If you step back and look at what's happening, clearly much of this data is unstructured, and it's going to come in at massive scale. If we go back to one of the fundamental premises that's been driving the industry, the rate of data creation is not slowing, it's accelerating. That's absolutely the case. It's really accelerating outside of the walls of the data center, as we've talked about many times, out on the edge of the network, and the form it's taking is unstructured, and it's coming at massive scale. The types of systems we're thinking about have to be able to scale that way.

They have to be able to respond. To your point, we use that notion inside our company. The hot data at the edge being handled in real time doesn't lend itself to a long trip up to some cloud and back down. It lends itself to being treated right there. The triage has to happen where the data is created. That's where the algorithm is going to be run. That's where you're going to see some of the micro-tuning being done, where you'll see drift detection being done, and where we'll be modifying the inference as a result. Those things all play well to, I think, certainly our strategy and what Jeff and the team have been building in ISG. You think about all-flash, what we've done with NVMe, you think about the scaled-out architecture that we've built with our unstructured products.

We have a very broad and deep portfolio to meet the needs to where the data is going and the type of data that's being created. Jeff, I know you can make that sound a whole lot better than I did.

Jeff Boudreau
President of Infrastructure Solutions Group, Dell Technologies

Probably not, Jeff, but I'll chime in here a bit. I think about data growth, I think of data gravity, and I think about where we are in time and where things are going with Gen AI and all things AI, which is all about data, right? Ingesting data and then making sense of that data. I think about infrastructure, probably going back to the last question, as the foundation. It's the foundation for all things AI. It's really important to understand that while compute is at the center of most Gen AI infrastructure, that compute will be fed by massive data sets and storage infrastructure. I think that's why your question is so important and near and dear to me.

The models that are going to be created by Gen AI need to be stored, and they're going to need to be protected, because this is actually going to become some of the most important data the world's ever seen as we go forward. It's much broader than compute, kind of where you were going. AI will definitely drive demand across, I believe, all parts of infrastructure. It's going to be compute, storage, data protection, and edge, where Jeff just was a minute ago. It's going to be networking, where John was a few minutes before that. It's even going to expand the client experience we were talking about with the PCs as well. I just think there's so much opportunity.

Specifically with storage, right now, the opportunity is both structured and unstructured. In full transparency, I know people want to lean toward the unstructured, and Jeff's completely right: unstructured is where the growth is coming from. Think of a parallel file system, think about object-scale-type performance. To the point Jeff made before, latency is going to matter, no ifs, ands, or buts. Making sure we have either real-time or near-real-time insights is going to be critical. Leveraging things like flash is going to be important as we go further into the future. By the way, not just flash; other media and network and protocol opportunities as well. I would also say software-defined, you nailed it. I think it's going to be the future, right?

A lot of times today, we have purpose-built systems targeted at specific opportunities. If you think about the massive scale where things are going, especially in the unstructured space, I think we'll move from purpose-built into the software-defined world more and more every day. Customers can scale with their data sets as they scale as well. Lastly, yes, I do believe it can be as a service as well. I think data as a service is definitely a ripe opportunity for us as we go forward.

David Vogt
Equity Research Analyst of US IT Hardware and Networking, UBS

Great. Thanks again, guys.

Jeff Clarke
Vice Chairman and Co-COO, Dell Technologies

Thanks, David.

Operator

Our next question is going to come from Steven Fox from Fox Advisors. Please go ahead.

Steven Fox
Founder and CEO, Fox Advisors

Hi, good afternoon. Thanks for taking my question. Jeff, the company has obviously had a roadmap to deploy technology on the edge for a while. You mentioned how AI at the edge is going to become critically important, and you also touched on manufacturing, where obviously Dell knows a lot. I was wondering if you could sort of pull those intersections together and talk about how AI and your products are going to play in manufacturing and how you envision manufacturing changing with the deployment of AI. Thanks.

Jeff Clarke
Vice Chairman and Co-COO, Dell Technologies

Sure. A lot of our work since a year ago with our streaming data platform, and some of the early partnerships we've built in Jeff's edge organization, have all been manufacturing-based: how to build modern manufacturing facilities, how to do machine vision, how to get higher yields, and things of that nature. I think it's absolutely essential to what we're doing out on the edge, and the data is key. What kinds of use models can you imagine? We can do preventative maintenance schedules. We can help with production planning. We can think about how to do forecasting, visual inspection, quality management, how to increase labor productivity.

Obviously, health and safety inside facilities are all aided by AI use cases on the factory floor, with our edge platform and what we've been doing there. The fact that most of that data tends to be unstructured makes the point we talked about a little bit earlier. I think that gets at your question, Steven. If not, please ask again or ask for more clarification.

Steven Fox
Founder and CEO, Fox Advisors

No. No, that was helpful. I was just trying to bring some of the points together since you touched on a bunch of them. Just the one thing you left out is sort of how you envision this playing out over months and years in the future. Like, how close are we to seeing some of the, you know, more advanced uses for, say, forecasting and preventative maintenance and things like that? Thank you.

John Roese
Global CTO, Dell Technologies

Yeah, you know, maybe I can jump in a little bit.

Jeff Clarke
Vice Chairman and Co-COO, Dell Technologies

Sure.

John Roese
Global CTO, Dell Technologies

There's a really important intersection. You know, Jeff just listed a whole host of AI use cases in a factory. The question is, do you want each of those to run on its own discrete infrastructure, or do you need to build an edge platform so that, in the multi-cloud world, those are all just software assets? If you look at our edge strategy, it's got three layers. The foundational layer is the hardware. We launched all kinds of new edge servers, and you'll notice something about them: most of them have more accelerators than CPUs. There's a reason for that, because we expect them to be the landing point for lots of AI processing.

The second layer, though, is the NativeEdge announcement we made, which basically says we really need to separate the logical and physical edges. We need a physical edge platform, which is the capacity pool, where things like Zero Trust and Zero Touch and the base level of capacity live. We need to treat the edge workloads, an image recognition package that's going to monitor an assembly line for quality assurance issues, or a quality assurance mechanism that's going to look at sensor feedback on the voltage level of the production systems themselves, as just software packages. They're containerized code running on a platform. The trick with NativeEdge, the uniqueness about it versus other offerings out there, is that we've turned it horizontal.

We can orchestrate and deliver code from whichever clouds and upstream services you want, as containers on that common platform. In manufacturing, that's really one of the first places where that materialized, because the diversity of digitization going on in the factories is just spectacular. Everything from HVAC monitoring, to power conditioning, to visual inspection of the production systems, these are all what we'll call apps that live out in that environment. Some of them are connected to public cloud tool chains. Many of them are delivered by industrial companies. We work with all of them, and most of those companies are working with us and others to refactor their code to run as containerized code. We see this convergence: the digital factory of the future, yes, is heavily AI-powered, but more importantly, it has the constraints of being in the real world.

It needs to be on a highly efficient platform, which is not just the hardware, but the ability to do whatever you want in the AI world as just a software function, which really lends itself nicely to the NativeEdge story. We see this intersection between the digitization of the factory, the need for an edge platform architecture as opposed to a whole bunch of mono edges, and a new class of hardware platforms that are actually optimized to run AI workloads as their default behavior. Those are the tick marks of what we did with our ecosystem, what we did with NativeEdge, and the earlier announcements where we launched a whole bunch of new edge platforms optimized for this. It's a very important space, and it is a leading indicator.

From an AI consumption perspective, manufacturing is one of the first markets to move. They were doing it before Gen AI, and they're going to do it even faster after Gen AI, and I think we're pretty well positioned for that.

Jeff Clarke
Vice Chairman and Co-COO, Dell Technologies

Maybe a little self-serving addition to that, Steven: the Dell supply chain has been digitizing for years now. We are using AI to do our production planning. We're using AI to do scenario planning, which is how we made it through COVID and got faster decisions made in near real time. It's what we're doing in our logistics network to improve our delivery accuracy to the hour and day, which is going over well with our customers.

Steven Fox
Founder and CEO, Fox Advisors

Great. Thank you very much. I appreciate all that color.

Jeff Clarke
Vice Chairman and Co-COO, Dell Technologies

You're welcome. Of course.

John Roese
Global CTO, Dell Technologies

Thanks, Steven.

Operator

Our next caller is Samik Chatterjee. Please go ahead.

Samik Chatterjee
Managing Director and Equity Research Analyst, JPMorgan Chase & Co.

Yep. Hi, thank you for taking my question. I guess I, on the ISG side, I had a couple of questions, and mostly, I mean, you referred to this as well in your prepared remarks. Power consumption of AI data centers is a major concern right now towards a large-scale deployment. Just in terms of how you're thinking about Dell participating in that solution, how are you working with your suppliers and addressing that? Secondly, investors do want to see association with NVIDIA, for most of the companies in the ecosystem. Outside of that, what are you seeing in terms of interest from enterprise customers and having a wider portfolio when it comes to, like, an AMD or Intel? What are you seeing in terms of engagement or customer willingness to sort of look at those, evaluate those solutions?

What's the current sort of breakdown of your portfolio on that front between NVIDIA-based servers versus AMD or other diversified suppliers? Thank you.

Jeff Clarke
Vice Chairman and Co-COO, Dell Technologies

Sure. Maybe a couple of thoughts on power consumption and cooling, then the ecosystem, and then I'll ask the pros to clean up for me. It's why we believe designing these things from the ground up is important. We worked on the 9680 for years with NVIDIA. We were able to build a system that took advantage of our own iDRAC capabilities, so we could drive power efficiency across all of the PCIe components. It allowed us to put some pretty robust controls in, so that we can actually work through the temperature transients during the various AI workloads. We can tune, in other words. We've customized the airflow. We knew this was going to be a challenge. At 700 W of draw times eight, there's a lot of energy being dissipated.

We've designed systems that can be more predictive, where we can, if you will, manage the acoustics. It's not just a bunch of fans blowing across these things, deafening people; we've actually looked at the acoustic design along with the thermal. We continue to look at new technology in Jeff's organization around liquid cooling and how we use cold plates on GPUs, CPUs, and other PCIe components, and at how we build standardization around the cooling interconnects so the systems are efficient in their heat transfer. We're looking at new technologies in air cooling and memory cooling, some of the pretty advanced engineering in Jeff's and John's organizations that allows us to really think about how we provide a system that can be cooled within the footprint it occupies in the data center.
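To put the wattage mentioned above in rough perspective, here is a back-of-the-envelope estimate (the per-accelerator figure is an assumption drawn from the 700 W number on the call, not a Dell specification):

```python
# Illustrative only: assumes eight ~700 W accelerators per node,
# per the figures mentioned on the call.
GPU_POWER_W = 700        # assumed per-accelerator power draw
GPUS_PER_NODE = 8        # accelerators per node

gpu_load_w = GPU_POWER_W * GPUS_PER_NODE   # heat from the GPUs alone
btu_per_hr = gpu_load_w * 3.412            # 1 W is about 3.412 BTU/hr

print(gpu_load_w)         # 5600 W per node, before CPUs, memory, and fans
print(round(btu_per_hr))  # roughly 19107 BTU/hr the cooling must remove
```

That is several household space heaters' worth of heat in a single chassis, which is why the airflow, cold-plate, and acoustic work described here matters.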

We know these are computationally intense systems. We've designed them accordingly, with a lot of forethought. In terms of our customers asking for alternatives, we'd love to see a rich and vibrant ecosystem of AI accelerators and NPUs, and we will. They're under development. Clearly, NVIDIA has a lead. It's a wonderfully capable, performing product and certainly has the industry's attention, which means people are trying to develop alternatives. John's team is engaged with, well, correct me, because I'm sure I'll misremember the number, over 50 different silicon companies developing purpose-built accelerators. We see the trend going from general-purpose accelerators to purpose-built accelerators. We see some folks trying to use integer-4 and integer-8 (INT4/INT8) arithmetic to simplify the calculations and do them faster without sacrificing accuracy.
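For readers unfamiliar with the INT8 idea referenced here, the following is a minimal sketch of symmetric 8-bit quantization in Python/NumPy. It illustrates the general technique only; it is not how any particular accelerator or vendor implements it:

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor quantization: map floats onto [-127, 127]."""
    scale = float(np.max(np.abs(x))) / 127.0
    q = np.round(x / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Map the 8-bit integers back to approximate float values."""
    return q.astype(np.float32) * scale

weights = np.random.randn(1024).astype(np.float32)
q, scale = quantize_int8(weights)

# Reconstruction error is bounded by half a quantization step.
max_err = float(np.max(np.abs(dequantize(q, scale) - weights)))
assert max_err <= scale / 2 + 1e-6
```

The payoff is that integer multiplies are far cheaper in silicon than floating-point ones, and the bounded rounding error is often small enough that model accuracy is essentially unchanged.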

There's a ton of fascinating architectural work on the table in the broad supply base, even as much as taking some of these simple algorithms and printing them on silicon, if you will, making them incredibly efficient from a power point of view. We're mapping this. John and the team continue to work through it. All of the engineering capabilities in Jeff's team that I talked about are how we're building these optimized systems. It's why we don't think everybody can build these, why we're selective with our choices, and why, when we put our Dell brand on it, we believe in it at scale. It's why we built the services around this, so we can deploy multi-clustered nodes to help enterprises deploy AI workloads, and they will be reliable, and they will work.

Jeff, I gave an earful, so if I missed something, fill in, please.

Jeff Boudreau
President of Infrastructure Solutions Group, Dell Technologies

Actually, I think you did a good job, but in the spirit of the short term, you talked about both what we've done around air cooling and around direct liquid cooling. We're also working on a power sizer for our customers and our partners, so they can see what the most efficient, most effective, most sustainable infrastructure would be for what they're trying to deploy. Whether it's something large or something much smaller, it can lead them to the right architecture and technology to support their needs.

Jeff Clarke
Vice Chairman and Co-COO, Dell Technologies

Thanks, Samik. I hope that helps with your question.

Samik Chatterjee
Managing Director and Equity Research Analyst, JPMorgan Chase & Co.

Yep, no, great. Thank you. Thanks.

Operator

Our next question.

Jeff Clarke
Vice Chairman and Co-COO, Dell Technologies

I know we're running a little late. That's okay. Paul sent us a note. We're gonna stay a couple extra minutes. Next question, please.

Operator

Okay. Our next question is going to come from Sidney Ho, Deutsche Bank.

Sidney Ho
Equity Research Analyst, Deutsche Bank

Great. Thanks for doing the call and taking the question. I think we can all appreciate why customers want to deploy AI capabilities on-prem instead of through the public cloud, but maybe you can touch on that a little bit. For on-prem, or maybe Tier 2 cloud, what are some of the reasons your customers buy from Dell instead of directly from the GPU supplier or from some other hardware supplier? In other words, what does Dell offer to differentiate from competitors? Along the same line, how much customization are your customers asking for, and what are the opportunities to generate ongoing revenue from the same customers? Thanks.

Jeff Clarke
Vice Chairman and Co-COO, Dell Technologies

It's a lot of questions. Let's work our way through that. I would start with this: the systems we build are built from the ground up with our technology partners to be deployed at scale, sort of the ending point of the previous question. We designed all sorts of thermal characteristics into these products, how to dissipate the heat we were just talking about, performance attributes, how these can be managed. We are experts in clustered systems, multi-node systems. That's what we build and have been building for a very long time. To be able to drive some of these workloads, the example I gave in our opening remarks of a cluster of four 9680s, which would have 32 GPUs in it, the interconnect to do that is quite complicated. We've engineered that, in that case, working with NVIDIA.

Equally important, and where I think there is a tremendous amount of value add, is our services layer, where we have professional services and consulting services that allow us to help customers through their challenges today. I think about it in the following way, if you will: How do we help simplify the generative AI design so these things can get deployed and put in at scale? At minimum, we have to scale the compute from one to 64 nodes, and Jeff and his experts make these things scale. We need to be able to scale from terabyte systems to petabyte systems. We need the bandwidth to scale across that vast CPU network and the multi-cloud cluster management that goes along with it.

We ultimately are trying to help our customers work through the entire AI life cycle. How they do inference, training, machine learning ops, how we help them with the software, the hardware, the support and service that goes along with it, all the way to the point where we may actually provide a managed service to help our customers. Jeff, John, and I were actually talking this morning about an AI APEX offer, where you can imagine we extend this as a managed service. I think that litany of examples gives the breadth of capability our company has, from a purpose-built portfolio of accelerators, to the largest storage portfolio in the marketplace, to the understanding of how to build multi-clustered systems, to the professional service and consulting service capabilities that go along with it.

The fact is, our AI strategy has been around for a bit of time, where we talk about artificial intelligence in our products, on our products, for our customers as they deploy it, and with our partner network, plus what we're doing inside our own company, and we have the resources and the investment behind it to bring it to fruition. That's how I'd answer that question, Sidney. I mean, Jeff and John, if I gapped something, I know you guys will fill it in.

John Roese
Global CTO, Dell Technologies

Yeah, I think, one, you know, easy is important for customers because this is hard. There are very few customers that have all the capability to do an AI project all by themselves at the component level. That is not even a logical place to start. You know, obviously, we know our friends in the hyperscaler world have a focus on one platform to work with, and they get an advantage in some cases of being easier. When you get to the non-hyperscalers, we're fairly unique in the sense that, you know, relatively speaking, we are as easy to work with in the sense that we can address the entire system. We can actually co-develop with the customer. We can deliver it as a service.

There is no AI project in the world that is based on a single piece of technology. It just doesn't exist. It's a storage, a networking, a compute problem. It's an integration problem. By the way, it's also a security and a trust problem. If you're going to implement AI in critical infrastructure, for instance, and you're going to use it to control the power systems that control the power grid, guess what? What you run it on has to meet certain security specifications. It has to be able to operate potentially in a Zero Trust environment, which is why we just launched Project Fort Zero, to address that issue.

You know, we find ourselves not only having the breadth of technology that is equivalent to almost anybody else in the industry and definitely bigger than any other non, you know, cloud service, but we also have, you know, the ability to deploy that technology in almost any topology the customer wants. You know, remember, we're not anti-cloud. We work with the cloud. In fact, Jeff is building lots of software-defined offerings that sit in the public clouds. If you want to do it, they're great. We can help you do that. If you want to do it on-prem, we definitely can do that. If you want to do it at the edge, we can do that. More importantly, if you want to do it in a multi-cloud hybrid system, we're almost unique in being able to do that.

If you follow the narrative, almost every large-scale AI system in the world is trending toward becoming a distributed architecture that will be hybridized, inferencing at the edge, training in the core, and that puts us in a pretty strong position, which I think customers see. They don't want to deal with a thousand companies. They want to deal with an expert that can actually address real-world AI. Not only do we have the product breadth and the one throat to choke, but we also have this multi-cloud strategy and the ability to exist in whichever topology you want that piece of the AI system to work in. Anyway, I think we have some pretty good assets there, and we struggle to find traditional competitors that can do that.

That isn't really something most of our traditional legacy competitors do; most of them are about a single product or a single part of the solution. I think that gives us an advantage over our traditional competitors, and our openness gives us an advantage potentially even over the hyperscalers in terms of how to navigate this.

Sidney Ho
Equity Research Analyst, Deutsche Bank

Great. Thanks very much.

Paul Frantz
VP of Investor Relations, Dell Technologies

Appreciate it. We're going to take two more questions here.

Operator

Our next question is going to come from Aaron Rakers from Wells Fargo. Please go ahead.

Aaron Rakers
Managing Director and Technology Analyst, Wells Fargo

Yeah, thanks for taking the question. This question builds off of the prior one a little bit: the complexity involved and Dell's expertise. A month or so ago, you announced Project Helix with NVIDIA, and part of that stack strategy was leveraging the AI Enterprise software suite that NVIDIA offers. I guess my question is, as you're engaging with enterprises, given their own expertise, is a software layer for AI embedded in enterprise infrastructure a requirement? Do they need a layer like NVIDIA's AI Enterprise software suite, or are there alternatives in terms of how they're developing their own internal AI strategies? Thank you.

John Roese
Global CTO, Dell Technologies

I'll take a stab at that. No, there isn't a universal software layer that's going to materialize. AI is a very diverse area where there are many different use cases. Like, if I'm building out an AI system to automate imagery for quality assurance in my factory, the software that I'm going to use to do that, and even the models I use, are very likely going to come from kind of industrial-centric OEMs and partners, people that we work with in that space. If you pivot over to building a chatbot, which is very popular right now, as you know, with the large language models, you'll go and find the best tool chain to do that.

The NVIDIA software is fantastic because they've created a collection of base models to address a number of use cases. In terms of speed to execution, the advantage the customer gets by maybe starting their chatbot design with NVIDIA is that they get a ready-made model, a turnkey architecture, and a system that can actually accelerate it. They can do it on their data, in their data center, under their control, so they avoid a lot of the regulatory and compliance obstacles. That's great. If you took that exact same architecture and went after some very specialized industry-specific AI model, the NVIDIA software might not be the best choice.

Maybe it is, maybe it isn't, but we expect it not to normalize into one kind of monoculture where all AI projects are delivered via the same software framework, just like they're not delivered by the same model. In fact, we love that we're seeing incredible expansion in the number of large language models available, because each of them does things in different ways. Some are more efficient, some are more performant, some are optimized for multimodal versus single-modal. Our view, pragmatically, is that we need to have things to get people started, and absolutely, the NVIDIA software stack does that, and it does it for a bunch of very important and useful use cases.

Over the long term, that's one of many tools the customer is going to use to make sure they can execute their AI projects across the very diverse set of functions in their enterprise. The interesting thing is, even though there's that diversity in the software layer, they've got to run it on their infrastructure, and it's definitely to their advantage for that infrastructure to be highly reusable, standardized, and common, which is really where we play. Software complexity will probably continue to be high for a reason, because of the diverse set of use cases.

Hopefully, we can normalize infrastructure complexity and make that simple, but we don't expect there to be one master software suite for all things AI anytime soon, even though NVIDIA is a fantastic way to get started, and it will address many of the use cases in a very easy way for customers so they can move fast.

Aaron Rakers
Managing Director and Technology Analyst, Wells Fargo

Very helpful. Thank you.

Jeff Clarke
Vice Chairman and Co-COO, Dell Technologies

Thanks, Aaron.

Operator

We'll take our last question from Mike Ng from Goldman Sachs. Please go ahead.

Mike Ng
Managing Director of Global Investment Research, Goldman Sachs

Thanks for the question. I appreciate it. I was just wondering if you could talk a little bit more about Dell's go-to-market for generative AI from a, you know, product perspective. It sounds like it's leaning more on probably Project Helix today, and then over time, it'll be powered servers with other types of compute. You know, what are you going to do from a, from a networking perspective? You know, are there any gaps in the current portfolio that you need to fill with other partnerships or through, you know, incremental R&D and proprietary networking? Thanks.

Jeff Clarke
Vice Chairman and Co-COO, Dell Technologies

Yeah, sure, Mike. I'll start, and Jeff can come in at the end here. Look, our AI offer doesn't start with Project Helix. Our AI offer starts with a broad storage portfolio that runs at massive scale-out architecture, particularly for structured and unstructured data, as we talked about; a data protection portfolio that can protect those valuable models; and the ability to scale that out to the edge. On our latest, sixteenth generation of servers, we purpose-built a set of AI servers. We talk a lot about the XE9680, but let's not forget about the XE9640, the XE8640, and the R760xa, a great inference machine. You'll see us continue to build a broader set of AI offers across Jeff's business.

Storage, compute; John talked about partnering across all of the different fabrics that exist, and we work with all of them. Obviously, we know a little bit about Fibre Channel and about most of the interconnects, since they interconnect our storage subsystems. We'll continue. We know how to cluster; we've been building multi-cluster designs for a long time. That's part of our offer. An equally important part of our offer, which we've tried to hit on, is the service capabilities: our ability to help customers size and characterize their AI needs, help them with their data and get it where it needs to be, figure out what workloads can be accelerated, and then ultimately deliver the service. I love John's word earlier, easy. Project Helix is a particular example that makes it easier.

It helps enterprises scale, design, build, and deploy AI systems, a combination of our capabilities and NVIDIA's, using customers' proprietary data to build their models. Look at what's happening in the open source world and how fast it's moving, and how to tap those open source model communities, with their libraries, your data sets, and the transformers that exist there, which let you take advantage of this capability very quickly. That's what we're trying to help our customers through, and there's great interest. The four use cases I talked about in the opening comments align to our portfolio, and helping customers go fast and making it easy is, I think, a good way to end my comments. Jeff, anything you would add to that?

Jeff Boudreau
President of Infrastructure Solutions Group, Dell Technologies

You already hit on our strategy in regards to AI in, on, for, and with, which I think is critical to everything you just said. If I think about the modern AI stack, it really has three layers, right? There's an infrastructure layer, which is hardware, software, and OSes. There's a platform layer, where a lot of the tool chains plug in, and then there's the application layer. I think that creates a tremendous opportunity for us to go to market and win in the hardware layer, the software layer, and the services layer, where Jeff was a few moments ago. That's the only thing I'd add, Jeff.

Jeff Clarke
Vice Chairman and Co-COO, Dell Technologies

Perfect. Paul, to you.

Mike Ng
Managing Director of Global Investment Research, Goldman Sachs

Thank you very much.

Jeff Clarke
Vice Chairman and Co-COO, Dell Technologies

Thanks, Mike, for the question. We appreciate it.

Paul Frantz
VP of Investor Relations, Dell Technologies

All right, thanks. We'd like to again thank Jeff and John. We appreciate the questions, and that wraps up today's call. We'll see you next time for Q2 earnings.

Operator

This concludes today's call. Thank you for your participation, and you may now disconnect.
