Session started. My name is John Thornton, I'm Citi's Industrial Specialist, and it's my pleasure to be joined by the team here from Vertiv. We have both the CFO, Craig Chamberlin, and the Chief Product and Technology Officer, Scott Armul. I've got a bunch of questions here. We'll open it up to the floor a couple of times to let people jump in where they feel it's best, but I'll get it started, and in about 10 minutes or so, we'll open it up as well. So, Scott, maybe we'll start with you. As Vertiv's Chief Technology Officer, I think we have to start with: how do you stay ahead on technology in such a fast-paced, changing technology world, serving data center customers?
You've talked about your one-stop shop in the past, in terms of your ability to offer thermal and power management, as well as global service offerings, that allows you to offer an infrastructure solution that starts with design and ends with service. So could you update us on the one-stop-shop capability, and how is it manifesting itself in the market today? And where are you pressing NPI hardest to stay ahead of competitors?
Yeah, rich question. And thanks for the opportunity. From a Vertiv perspective, we've always prided ourselves on being very close to the customers and the technology partners that are really driving the industry. Obviously, we've been very vocal about our closeness and our partnership with NVIDIA, but also data center customers, hyperscalers, wholesale builders, on kind of what the needs are and where things need to go. And working on joint developments, joint architecture discussions, kind of debating around the needs for densification. Where does the portfolio need to go? What are the right architectures within the data center space for us to really unlock and enable kind of the performance and the optimization of today's and tomorrow's GPUs and next-generation chips?
So that customer closeness and customer intimacy, and organizing ourselves to have technology folks outbound and working within the space and within the industry, I think has been very powerful for us. From a Vertiv perspective, we've focused a lot on an evolution from point products and point solutions to system-level thinking and a solutions orientation, both in how we think about the interoperability of our products and technologies and in how we think about delivering them. So our Infrastructure Solutions business unit is really oriented around deploying white space solutions and thinking through turnkey data center solutions, like our Vertiv OneCore product.
That gives us an interesting lens of seeing how all of the pieces need to fit together in the portfolio, which is helpful for us to then take the customer context, take the customer data points, and then we're also kind of a designer and builder of a turnkey data center solution. It informs what the point products really need to look like. It better informs and outlines kind of what the interconnects and what the sizing of the blocks and the stripes for a data center to work optimally really need to look like. And then, ultimately, it points to where the physics challenges are going to be. Where do we run out of capacity in a busbar? Where are the connector problems? Where is the physical limit of a single-phase direct-to-chip type of a deployment in a next-gen GPU architecture?
Then, how do we work backwards and more proactively on our own investments and our own technology cycles? We look customer-in and technology-out for where we need to take our portfolio, where we need to evolve and fill in some of the white space, so that our complete portfolio stays a complete portfolio as architectures continue to evolve.
You know, we don't want to front-run the Investor Day, but obviously, there are a lot of questions on whether Vertiv's opportunity is increasing in high-density compute versus the $2.75 million to $3.5 million per megawatt combined opportunity you updated us on at your last Investor Day, especially in light of 4Q's orders. So maybe the best way to ask the question: go over what has changed versus your expectations from November of 2024. It would seem high-density compute has proliferated faster, and given the increasing complexity of systems, redundancy in power and thermal is even more of a requirement than first thought. And Vertiv hasn't sat still either, adding PurgeRite and Great Lakes, for example.
So when you're getting orders, is it at least safe to say that you tend to be at the high end of that prior range, or maybe even higher? Maybe some more color on what you're seeing there.
You wanna start, or you want me to start?
Sure, I can start. Generally speaking, we're at the higher end of that range the more share of wallet we have, and when we're delivering turnkey solutions like I just talked about, whether it's a SmartRun within the white space or a OneCore type of solution, we tend to be at the higher end of the range in terms of share of wallet and total content. I think you rightfully pointed out areas of the portfolio where we're continuing to expand, and obviously, a year and a half or two years ago, liquid cooling was in only some of our orders.
Now, liquid cooling is part and parcel to most of our reference designs and most of our customer engagement when it relates to AI deployments and AI data centers. And then you pair that with the increasing scope of some of the acquisitions and the portfolio expansion we have done: Waylay, on the controls and service side; PurgeRite, on the secondary fluid network and a higher degree of commissioning services; or Great Lakes, within the rack environment.
Rounding out the position of that portfolio helps expand the share of wallet, and I think it's informing deeper and greater content as we deploy orders for some of these larger sites. But, Craig, go ahead.
No, I think we'll probably dive a little deeper into it at the Investor Day. But Scott brings up a good point, and it ties back to your original question: the system-level architecture discussion that we see going on, and not just going on but starting to manifest itself in orders, naturally gives you the opportunity to get more share of wallet, and we like that. We like complexity, we like when customers come to us with problems, we like to develop solutions for them, and that all starts with system-level thought. When we talk about point-level products, that's where you start. You start by being able to do one point-level product, but then we like to stack on top of that.
So when a customer comes in and says, "We need a power solution," we want to be able to sell them the whole powertrain. And when you introduce that whole powertrain, you introduce a higher ability to capture share of wallet. So while it still moves around, and on average it's in that neighborhood, as more orders come to us, go through the pipeline, and become more of a system-level order, we will see that push to the right. That's something we like, and we believe it's a competitive advantage for us.
You mentioned orders. That actually works well for the next question. Obviously, Vertiv just moved away from reporting quarterly orders-
Yeah.
... and we understand you don't want to talk too much about quarterly orders expectations.
Mm.
But I think the issue is that Q4 orders were just so much higher than we've seen in the past, and the question we have been getting from investors is: why didn't we see this coming? I know Gio said Vertiv's orders should grow again in 2026, which I think means more than $18 billion, approximately your 2025 orders. But could you give us a little more perspective on how Vertiv thinks about the market, given your pipeline, as you said, was more than refilled despite Q4 orders? Do you see the order cycle continuing to grow for several years to come, and is it fair to say we could see more quarters like Q4, or even bigger, going forward?
I'll break that up into a couple of sections, and then I'll pass it over to Scott for more context. One, we really enjoyed the outcome in the fourth quarter. Of course we did. Everybody saw it, and it was a manifestation of what we saw during the year: opportunities building, these system-level opportunities we were shepherding along, while in the background other opportunities came in. So while the pipeline did execute, and executed well in the fourth quarter, you also saw it refill back in the spaces in which we executed those orders.
So when I take a step back and say, "All right, where do I see this going in the future?" I'm not gonna comment on 2026 versus 2027. In 2026, yes, we see that there's opportunity to continue to grow the order book, and that's from the pipelines we see and the triangulation of what we hear out in the space about people spending CapEx. We know the CapEx is coming online. We see it in pipelines and opportunities, and we believe that will be executed within 2026. If that happens, then obviously we would see an order book that is very strong. As for the size of the $8 billion quarter, a lot of that's phasing.
I mean, some of that came within December, and it could've easily fallen over to January. Do I think every quarter from here on out is $8 billion? Probably not. I'd love to be able to say that, but no. There are always gonna be shifts and movements, and that's why getting away from quarterly order reporting and forecasting is important to us, because it is a very dynamic situation out there, and you've got to be at least somewhat flexible to move around a little bit. I mean, last year in EMEA, the second and third quarters were very low on orders, but the pipeline was strong.
So you heard Gio say a lot of times, "It's a coiling spring, it's a coiling spring." Well, we could see that, 'cause we saw the pipeline; everybody else just saw the orders being down. Then in December, that spring uncoiled a little: the execution of orders was very strong in the fourth quarter in EMEA, and that influences next year's revenue growth. The second half of next year's revenue growth is really driven by what we saw in the fourth quarter, but we also saw another robust pipeline.
So, thinking of it that way, with what we see in front of us and the opportunities, I would say we feel very good about where we sit today, and very good about the triangulation of what we're hearing in the market about what people want to deploy.
Mm-hmm.
I don't know, Scott, do you have any...?
I think you're good.
Yeah. Maybe, along the lines of that topic, can you give us a little more color on demand by region? I would assume the $8 billion in orders in Q4 was still majority U.S., but I was trying to get some more perspective on EMEA. You mentioned the coiled spring.
Yeah, yeah. Strong Americas, as we've seen; everyone hears about the deployments in the Americas, the ongoing build-out of the data center market there, and the appetite for AI. The fourth quarter was very strong for us, and we see the pipeline as very strong for 2026. EMEA was a bit of a proof point for us. Fourth quarter was a jump in orders. Felt very good.
The pipeline continues to be strong, and we think the line of sight there is that it's been a bit of pent-up demand, almost a delayed reaction to the market, that we feel is snapping back. We expect to see that pipeline convert in 2026 or 2027, and we'll see how that plays out. But again, fourth quarter was a pretty good proof point there. In APAC, the rest of Asia and India have very strong pipelines and very strong execution. China's kind of a unique bird. We see a pipeline; we don't see it executing very well. It's a little slower than we would've expected.
We felt that in the fourth quarter, in terms of some delay in orders, and we're still feeling that one through. But I'd say India and the rest of Asia, we feel very strong about those two spaces.
Maybe you could talk a little more about what you're seeing on the legislative and permitting front, as well as what you're hearing from your main customers as they think about the infrastructure build-out? I guess you addressed some of that as it relates to EMEA, but I don't know if there's anything further legislative-wise you wanted to touch on.
Yeah, from a data center perspective, there's a lot of global focus. One, power availability obviously dominates conversations: time to interconnect, the requirements to actually connect to a grid, the requirements for take rate, and other things. But with some of the emerging requirements and regulations at a grid level, from what has happened with ERCOT in Texas or some of the data center activity happening in Spain, we're also starting to see more scrutiny on larger AI data centers' impact to the grid and how they behave in terms of grid interoperability. That is creating some new and renewed focus on power architectures and data center designs.
There's a lot more renewed focus, especially in the U.S., on what on-site power generation, or the concept of bring-your-own-power, really looks like, whether in accelerating a grid interconnect time or enabling a bridge-power type of solution, and on how you pair that with what are inherently higher-density, more dynamic, more synchronous loads from AI data centers. It creates a fertile ecosystem for new architectures to be thought through: new product solutions, new portfolio solutions that can help customers solve those types of problems.
The introduction of battery energy storage systems, and of different types of power control and power algorithms within our portfolio and product set, helps unlock the economic use of data center power capacity, and potentially energy storage capability. That's a way not only to work around regulations and some of the things that would delay data centers being turned on, but to become a better grid partner and a better grid citizen, so that data centers and utilities, whether in the U.S. or globally, are working more symbiotically and in concert.
A lot of our focus, from a portfolio, development, and capability perspective, is on building out an enablement of that ecosystem: acting as the infrastructure glue and some of the control intelligence that lets customers think about on-site power generation and about overcoming some of these regulations in a way that makes the data center less of a burden and more of a national asset.
I think Scott brings up a really good point there: when the regulation becomes something solvable from a systems standpoint, we love that. It's a complication we like to design through with the customer, toward an outcome and a solution. When you're a point provider, providing one piece of equipment, you're letting the infrastructure build-out happen around your part, and that's good, 'cause you can deliver that part. But when you're a reference architect and a reference designer, you're creating a symbiotic relationship: you are developing the solution that is the go-to whenever that regulation or that hurdle arises.
So it's back to this strategy and proof point we put out a lot: if you start to think about the system-level architecture, you're going to be the design driver and the technology driver. That's where we want to be, and that's what we believe is our differentiation when you look at the space.
Okay. That's kind of heading where I was going next: products and the roadmap going forward. So, Scott, let's start on the power side first. The upcoming shift to 800-volt architecture has drawn a lot of attention, and Vertiv has stayed at the forefront and is planning on releasing its 800-volt portfolio in the second half of 2026. Can you walk us through what's fundamentally driving the industry toward that standard? And not just the incremental efficiency gains from moving to DC, but also whether the industry is hitting practical physical limits with traditional AC power distribution as rack densities continue to increase.
Yeah, that's the name of the game: as densification increases, as chip power and heat levels continue to rise, as we move in NVIDIA nomenclature from Blackwell into Rubin, to Rubin Ultra, and beyond, we start to hit practical physical limits on how much copper, how much busbar, how much distribution you can put into a single or double rack space. In a traditional, roughly 50-volt at-the-rack architecture, you run into limits, space constraints, and challenges with heat. Moving to an architecture like 800 volts DC solves a lot of that. You get the inherent improvement in efficiency, you get some improvements in reliability, but you bring with it an architecture change.
And in a data center space that is typically grounded in tradition and comfort with certain types of designs and reference architectures, a shift of that magnitude is significant. But we've been pretty vocal in trying to lead the charge and help shepherd a lot of the ecosystem participants: this is something we can unlock and enable from a safety, compliance, power delivery, and power architecture perspective, to truly optimize and enable performance at a GPU level, especially when GPUs start to move toward native 800-volt DC. And we wanna do it in a thoughtful way, so we've been working on a portfolio.
We want to make sure we enable an end-to-end offering, so this isn't dependent on multiple piece parts. But we also understand it's not gonna be a binary shift-
Mm-hmm
... from a traditional 480-volt AC architecture in the U.S. to, all of a sudden, all data centers being 800-volt DC, period, full stop. This will be a hybrid, evolving, flexible type of architecture and ecosystem, where we fully expect to see data center sites built on a 480-volt AC architecture, with 800-volt DC deployed in more of a contained or sidecar type of deployment to unlock some flexibility. If we have a GPU that is going to run natively off of 800 volts DC, we can provide that power conversion and distribution through an 800-volt conversion box.
If you have traditional or legacy loads in that same data center, you can leverage the traditional 480-volt AC architecture. And down the road, you may have a deployment that needs to happen now, where we want to be ready for 800 volts DC; there we can still leverage in-rack 800-volt-to-50-volt conversion boxes that potentially go away longer term.
All of that to say, I think it's going to be an evolutionary progression. But from a product portfolio and enablement perspective, we want to make sure we're unlocking the ability to power chips at this level, because, in working with some of our partners at the chip level, we think that is the key to unlocking new thresholds of performance, new capability and usefulness of the AI models, and the training and inference that come along with these progressive generations of GPUs. We want to ensure that not only are we not the bottleneck, but that we are the enabler of switching on those types of deployments and architectures.
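The copper and busbar argument behind the 800-volt shift comes down to simple arithmetic: current scales as power divided by voltage, and conductor size scales roughly with current. A minimal sketch, where the 200 kW rack power and the exact bus voltages are illustrative assumptions rather than Vertiv or NVIDIA specifications:

```python
# Illustrative only: why a higher distribution voltage eases busbar limits.
# I = P / V, and conductor cross-section scales roughly with current, so a
# 16x voltage step cuts the copper and busbar requirement dramatically.

def rack_current_amps(rack_power_w: float, bus_voltage_v: float) -> float:
    """DC current a rack draws at a given distribution voltage
    (ignores conversion losses, which add a few percent in practice)."""
    return rack_power_w / bus_voltage_v

RACK_POWER_W = 200_000  # assumed ~200 kW AI rack, not a vendor spec

i_50v = rack_current_amps(RACK_POWER_W, 50)    # legacy in-rack bus
i_800v = rack_current_amps(RACK_POWER_W, 800)  # 800 V DC bus

print(f"At  50 V: {i_50v:,.0f} A")   # 4,000 A -- impractical in-rack busbar
print(f"At 800 V: {i_800v:,.0f} A")  # 250 A  -- manageable
print(f"Reduction: {i_50v / i_800v:.0f}x")
```

At the assumed numbers, the same rack draws 4,000 A at 50 V but only 250 A at 800 V, which is the "physical limit" on copper and space that the answer above describes.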
And then, within the 800-volt architecture, obviously backup power is still needed, but how does the role of the UPS change, and how is Vertiv positioned?
Yeah, and I touched on that a little as we talked about the evolution of power infrastructure moving upstream to enable on-site power generation and other things. But generally speaking, I'll tackle that question in two parts. A traditional centralized UPS serves a lot of functions. It's a power conditioner; it's a switch to backup power; it's autonomy and battery to ride through issues that may happen with power quality. It helps insulate the downstream from upstream noise, harmonics, and other distortion, and it helps form very good, high-quality power downstream to the devices themselves.
GPUs and AI data centers in general, very simplistically, are dynamic loads that operate in sync, and that creates very interesting challenges for upstream power infrastructure and battery autonomy. Energy storage will start to move closer to the white space-
Mm
... whether that's BBUs or CBUs, battery backup units or capacitor backup units, to help with power smoothing and to help buffer some of that load volatility and those load fluctuations. At the same time, based on everything I just talked about relative to site-level grid interaction, we see a lot more interest in and desire for energy storage upstream, more at a utility level: to enable low-voltage ride-through, to prevent sites from islanding or having to disconnect, and to provide a source where excess load or grid capacity can be put into a data center without having to disconnect. All of those things move in a barbell fashion, in terms of where the role, capability, and intelligence of a UPS may sit.
We're going to see it move closer to the white space, and we're going to see it move upstream toward medium voltage in the substation. As for the role, in terms of managing energy sources, filtering and conditioning power, and, maybe most importantly, driving control schemes and control mechanisms that enable power smoothing between the upstream and the downstream in an intelligent fashion, to unlock better performance and turn data center assets into things that can be used for economic benefit: the role of the UPS, I think, is more important than ever. It's just potentially moving into different parts of the power chain, both upstream and downstream.
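The power-smoothing idea above can be sketched in a few lines: storage near the white space charges during the lulls of a synchronous AI load and discharges during the bursts, so the upstream feed sees a flat draw. The load profile and kilowatt figures below are invented for illustration, not measured data:

```python
# Toy sketch of "power smoothing" with energy storage near the white space.
# A synchronous training load alternates compute bursts and comms lulls;
# storage absorbs the swing so the grid sees a constant draw.

def smooth_load(load_kw, target_kw):
    """Per step, return (grid_draw, storage_flow): storage discharges (+)
    when the load is above target and charges (-) when it is below."""
    grid, storage = [], []
    for p in load_kw:
        grid.append(target_kw)          # upstream always sees the target
        storage.append(p - target_kw)   # battery covers the difference
    return grid, storage

# Assumed profile: 900 kW bursts alternating with 300 kW lulls.
load = [900, 300, 900, 300, 900, 300]
target = sum(load) / len(load)          # hold the grid at the mean, 600 kW

grid, flow = smooth_load(load, target)
print("grid draw:", grid)   # flat 600 kW every step
print("storage  :", flow)   # +300 kW discharge / -300 kW charge
```

In practice the control problem is much harder (state of charge, ramp rates, losses), but this is the buffering role that BBUs and CBUs play in the answer above.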
I think I've got one more, and then I'll open it up to the audience for questions. Shifting over to the thermal management side, maybe we can start by addressing some of the discussions post-CES that pointed to Rubin being able to use warmer water instead of needing water chillers.
Oh, great.
From your viewpoint and discussions with customers, what are you seeing? Is this a material design change, or is this more along the lines of next-gen chips bringing some efficiency gains, with hybrid thermal-chain infrastructure remaining?
I love this question.
Yeah.
The 45-degree-C water question, and the world kind of went nuts over Jensen's comments at CES. From our perspective, interestingly enough, that's not necessarily a new comment-
Yeah.
... from NVIDIA; even the two prior generations of NVIDIA chips have been able to, and have been encouraged to, run at warmer water temperatures, up to 45 degrees C from the CDU delivered to the chip. The reason behind that push to higher water temperatures, very simplistically, is that warmer water delivered to the chip means you can do more free cooling with dry coolers for heat rejection in a data center environment. If you eliminate more mechanical cooling, you inherently free up more peak power, or more power availability from the site, to be redeployed back to GPUs, because you don't have to account for it on the thermal management and heat rejection side. So it's an admirable and advantageous direction to push.
That being said, in order to actually reject heat effectively, 24/7, 365, with a dry cooler or a mechanical-free type of heat rejection, there aren't that many locales that can enable that-
Yeah.
... and can handle that, when you think about ASHRAE design conditions and the environment. So typically the push is to maximize power availability for GPUs. But in practical applications, if you want to run at colder temperatures, or if you want to deploy in areas that aren't generally Arctic and northern climates, the heat rejection still needs to happen-
Yeah.
... and still needs to go somewhere. If it's a dry cooler, that still fits within Vertiv's portfolio. But maybe more importantly, there are going to be peak times throughout the year, and ambient environments, where mechanical cooling is still needed to hold those operating conditions, or where mechanical cooling will be necessary if you want to run at lower water temperatures. So from our perspective, one of the products we developed and launched last year, the Vertiv Trim Cooler, is actually the best of both worlds for exactly this problem. You have, effectively, a very large dry cooler paired with a smaller mechanical chiller in one package, which allows you to run at much higher capacities in a much smaller footprint.
You can run in free cooling and take all of the benefits of that warmer water, and then, for those limited times throughout the year when you need to trim, you can turn the mechanical cooling on. So you can exercise both sides: you get the benefit of warmer water without the full commitment, and you enable a data center design that's more flexible if you want to deploy multiple GPU generations, or run at cooler water temperatures for any of the reasons somebody might want to do that. So I think it just further complicates what is already a fairly complicated story on data center heat rejection.
From our perspective, as Craig said, we kind of love the complexity, because we envision forward data centers as hybrid environments of chillers, dry coolers, heat reuse, and other technologies, all blended together to optimize for what our customers are trying to solve in the end.
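The power-for-GPUs logic behind the warmer-water push can be sized with a back-of-the-envelope estimate: a chiller's electrical draw is roughly the heat load divided by its coefficient of performance, and eliminating it frees that power for compute. The IT load, COP, and dry-cooler fan penalty below are plausible-sounding assumptions for illustration, not Vertiv figures:

```python
# Rough illustration of why warmer supply water frees up site power: if dry
# coolers can reject the heat without a chiller, the chiller's electrical
# draw can be redeployed to GPUs. All numbers are assumptions.

def chiller_power_kw(it_load_kw: float, cop: float) -> float:
    """Electrical input a mechanical chiller needs to move it_load_kw of heat."""
    return it_load_kw / cop

IT_LOAD_KW = 10_000   # assumed 10 MW of IT load
CHILLER_COP = 4.0     # assumed coefficient of performance for the chiller
FAN_PENALTY = 0.02    # dry coolers still spend ~2% of load on fans (assumed)

mech_kw = chiller_power_kw(IT_LOAD_KW, CHILLER_COP)   # 2,500 kW for chillers
dry_kw = IT_LOAD_KW * FAN_PENALTY                     # 200 kW for dry-cooler fans

print(f"Freed for GPUs: {mech_kw - dry_kw:,.0f} kW")  # ~2,300 kW
```

A couple of megawatts per 10 MW site is why the free-cooling hours matter; the trim-cooler approach keeps that benefit most of the year while retaining mechanical capacity for the peak ambient hours.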
Gotcha. Let me take a moment here to pause and see if there are any questions from the audience.
Hey, thanks for the time, Scott, and thanks for making time as well. On the conference call recently, Gio mentioned that the installed base was a big opportunity for service. With PurgeRite and the other acquisitions you've done, what do you see as the yearly dollar amount you could access in service per installed megawatt? Is that on the order of tens of thousands of dollars? Hundreds of thousands? How much does it cost to refill and clean a loop, as PurgeRite does, depending on the size of a data center? You have one of the largest service networks in the industry; I think it's 4,400 technicians.
I'm sure you can get a lot of value out of that. How are you looking to monetize it?
I'd start by saying we don't put a reference point out there for how many dollars per megawatt we get on a services basis. There are a couple of different reasons why, but we're always trying to wrap our arms around our install base and grow that services market. What we've seen over the last three years is steady growth in the services market and in services output and revenues, which we feel is a very strong proof point of us being a provider of choice when it comes to services. So the goal is to continue to grow that, and grow it through different levels of services.
One being our typical model, which is a lifecycle services model, where we're at the data center providing running maintenance and services, the break-fix model. Then you have PurgeRite, which is a specialty solution. And it goes even further beyond that with something like Next Predict, where you're involved in the way the data center is operated, looking at it from more of an optimization standpoint. So for us, it's continuing to see that services revenue, and recurring revenue, grow on a regular basis. Now, if you look at the face of our financials, it's hard to tease that out, because the OE side is growing so fast.
What I can say with pretty good conviction is we do see very strong growth in our services market, and the goal and the strategy would be to continue to outpace everybody in the market for that services portfolio. 'Cause at some point, as the OE slows down and becomes more restrictive, you want to be the one growing services faster, 'cause that's gonna be your growth engine. Right now, we're deploying a massive amount of install base, and we want to be first to market to service it. We feel like we have a pretty good stranglehold on that. And again, PurgeRite's an add-on there, Next Predict's an add-on there, and that's what we want to keep doing.
Ultimately, the goal would be to see that continue to grow and become a larger portion of reported revenues. But that's not gonna happen in the environment we're in today, at least not for the foreseeable future, because the OE is growing so fast. Yes.
Thank you. I just had one more quick question. We've seen that ABB recently invested in DG Matrix, and Eaton acquired Resilient Power as well. I feel like the industry, Delta included, is moving toward the idea that solid-state transformers will be essential when we eventually move to 800-volt DC architecture. As far as I'm aware, you currently don't have an offering. I just wanted to understand where you sit on that and what you think you'll need to add to the portfolio to have that comprehensive offering. Is it something you could develop internally?
Want to start?
Yeah, I'll jump on that. Certainly, as I alluded to with some of the capability of the UPS moving upstream, integrating medium voltage and BESS capability together, I think there's a multitude of possibilities as far as where the architecture settles and what the right thing is, whether it's an 800-volt output or whether we move to a 1,500-volt output. There are a lot of things up in the air and a lot of architecture still to be debated. That being said, one of the things we are seeing from a market perspective is interest in moving toward power conversion via solid-state transformers, and I think the term "solid-state transformer" often gets misused, or there are various different meanings for the same terminology in the marketplace.
But I think that can be an organic pathway, using and leveraging our core competence and capability in general power conversion, at both the high-voltage and low-voltage level. And we're coming to 800-volt DC from a heritage where we're not starting into the DC power space without capability that goes back many years, obviously leveraging our capability in telecom and our long-standing history in the DC power space.
It's a business unit and a line of business at Vertiv and Emerson that I used to run at one point, so it's very near and dear to my heart. So seeing how all of these pieces start to triangulate and the fact that we're talking back to DC and the data center is an encouraging development to me. But maybe a long-winded way of saying to come back and say, like, certainly we look at technology pieces in the portfolio and what could help us from an external market perspective. But it's absolutely something that we're thinking about and considering in terms of our organic technology development and pathways as well.
There's one more.
Thank you.
Is this on?
Yep.
Hi. Wondering just in terms of your conversations with clients, I'm curious as to where they stand in terms of prioritizing, power storage versus backup power generation, and which is, like, more effective, more available, more efficient? How, how are those conversations going? I'm curious. Thank you.
Yeah, generally speaking, we try to stay out of the debate of should you buy gensets or deploy with gensets or batteries, or all of those other things. But I, I do think from a regulatory perspective, from a, I'll call it NIMBYism or not in my backyard perspective, there's a lot of pressure just on leveraging of, of, of gensets in particular. You contrast that with now we're talking about on-site power generation and reciprocating engines and natural gas turbines potentially being deployed on site, so there's, there's kind of a, a weighing of both sides there. But I think deployment of, of larger scale energy storage has some inherent advantages, like I just talked about, beyond just backup and battery autonomy, beyond just, ability to switch over and, and kind of ride through issues.
It unlocks and enables some of the economic use, time of use, peak shaving, other types of things for the data center asset. It enables maybe a higher degree of control and coordination and responsiveness, that, that generators can't handle on their own. And so I think a lot, a lot of the reasons we're seeing energy storage become more of a prominent part of the conversation is because of those multiple variables and those multiple potential uses for battery energy storage systems, as opposed to just standby power.
Okay. Maybe spend a moment talking about competition. Liquid cooling is expanding. Or it's expected to continue scaling globally at an extremely fast pace. Dell'Oro has talked about the market reaching $3 billion in revenues in 2025 and surpassing $8 billion in 2030. This has caused a rise of new entrants, particularly in APAC, where we still hear some investors' concerns that smaller suppliers could drive pricing pressure. Vertiv has obviously expanded its liquid cooling footprint quite significantly since the CoolTera acquisition, and more recently with the improved service capabilities from the PurgeRite acquisition. How do you see Vertiv's competitive differentiation in liquid cooling as more suppliers enter the market?
Could you update us on whether you still are expanding liquid cooling capacity after the 40x expansion that you've had over the last couple of years?
You wanna start?
Sure. Maybe I'll start with the last part. Yes, kind of still investing and still thinking about capacity for liquid cooling. It's becoming more and more a prominent part of the discussion. From my vantage point, as I think about the competitive landscape, and yes, it's attracting a ton of attention. Yes, it's going to continue to attract a ton of new entrants, and I think a market that's growing like this and that is already this big, we would fully expect that to be the case. From a differentiation perspective, I'll lean back and point back to more of the system level design and thinking.
Integrating a CDU with a piece of infrastructure like a Vertiv SmartRun, where you're incorporating the secondary fluid network, you're able to add intelligence and a certain aspect of control to manage pressure and valve control and other things, I think helps to differentiate this, the CDU as a core part of a bigger offering, as opposed to just a device that can move fluid. The other part of kind of how we view the competitive landscape is you can buy a CDU from anywhere-
Mm-hmm.
... but scale matters, and experience matters, and maybe it seems a little funny talking about experience, just given how kind of new and, and nascent the market still inherently is. But when you're talking about a 100 MW site that's going to be liquid-cooled, you have to be able to operate at a significant scale. You have to have capability and technical expertise and know-how to be able to, from a services perspective, help flush, fill, turn up, set up, and orient the product in itself and the secondary fluid network.
You have to be able to troubleshoot, and you have to be able to help customers get comfortable with that scale and that size of infrastructure, where candidly, like a new entrant to the CDU space or somebody that doesn't have that breadth or that capability or that ability to meet the customer where they are in that scale, is going to struggle tremendously.
Mm.
So the product has to be good, and it has to perform, and we want to have the best-in-class product, but it very quickly turns to a capacity and experience and an expertise discussion with our customers as we start to talk about ramping up liquid cooling. And, that's where I think we can have a different conversation than some of our competitors.
I think you asked a question about how it's evolved for us. I think it's evolved for us exactly that way, where CoolTera was an entrance for us into the liquid cooling, and we scaled it, and I think we learned a lot, and we started to become more integrated in how we thought about that as a system level again.
And then, how would we address that from a design perspective, a services level perspective, and an outcome perspective for our customers? That evolution point providers still haven't went through. And I think the point providers are, are people that you're seeing in some of those regional spaces, that they do win on price, but you can only win on price for so long. At some point, the proof's gonna be back to: how are you delivering for your customers? And I think that's where we, we lean on our competitive advantage there.
Thank you. I'm gonna jump to, let's talk some incrementals. Craig, I think you were pretty clear that 30% incremental margin is the right framework to think about in 2026 and maybe 2027, as well as partially giving you what to fund growth investments. But the old mantra at Vertiv used to be to maintain fixed costs and then scale the business, which you are obviously doing, and you were also suggested you would remain in the green in terms of price versus cost, especially given potentially easier tariff comparisons as 2026 evolves. So why would or why couldn't there be some potential tailwind behind the incremental margins if Vertiv executes well? I'll-
I think the execution, well, is something that we point to. We wanna be sure that we execute well. The space that we're working our way through, again, price cost looks like it's going to be, you know, neutral, maybe positive for us going into next year. We feel really good about fixed cost leverage. But when I come back to fixed cost leverage, one, we're scaling for capacity to be able to be out and win in the commercial market, be out to support the technology development that we need to be able to be out there in front of, which is a little bit more than, you know, we typically have had in the background.
We are growing in that space, but then there's also just the coming on of the brownfield space that we're expanding out and the greenfield that we're putting in, in Asia and some other spaces. So that inefficiency, naturally, as you ramp up, if you can squeeze that inefficiency down, you would see, you know, typically, your margins continue to march up. Now, ours will go up again this year. We ended this year at 28%. We're kind of facing 29%-30% next year, and that's what we feel good in. Now we're starting first quarter at 28%. So, I think it's just being diligent around what we have to be able to go and deliver. We're very proud of where we've been able to push margins, and we're not gonna stop here.
You look at our long-term framework, we have a plan to continue to grow that, and I think we'll give more, you know, more thoughts on that as we come back in our Investor Day. But there is not an intention on our side to stop at any point in time. We believe that we're gonna continue to execute and continue to drive there. This is what we see right now based on 2026 being a little bit of a different year from a ramp perspective, but still margin accretive for us.
Got time for one last one? I'm gonna sneak in here that we're asking everybody: is what are the top two or three innovations and structural changes affecting your company over the next five years? And are there any emerging industry trends that are perhaps being overlooked in the current discourse?
Here we go.
Covered a lot. You got 20 seconds.
I was gonna say, we covered a lot of that.
Yeah.
For me, very personally, with Vertiv, just the pace of technology change-
Yes.
... and keeping up with it,
Pretty dramatic.
... probably the thing I, I'm most focused on. When we're on a kind of a 12-month GPU development and deployment cycle, just the rate of change of this industry and keeping pace and, and how we can articulate kind of keeping infrastructure two GPU generations ahead, is the thing that we're trying to maybe change the foundation of our engineering approach, our technology approach, and the way in which we're, we're engaging customers, to ensure that we're unlocking and enabling that. But that's kinda what keeps me up at night and what keeps me smiling at work.
And I would just add on that, and I mentioned it a lot during this conversation, and it's probably because my background is in heavy industrial, and I lived and died by services. But the potential we have in services, the install base that we built, the technology advantage we have, and continuing to stack, you know, equipment out there that's in service and use, building on that advantage of wrapping your arms around it, and it being yours, and having the entitlement to service it, and having the entitlement for a long-term recurring revenue base, is a differentiator for us, and we've got to take advantage of that. I think that's the space where I'm, you know, I'm laser focused on making sure we execute on that, 'cause that's the future of the company.
Scott, Craig, thanks for joining us here in Miami.