Great. Well, thanks, everyone, for being here for the second session this morning. It's my pleasure to have Vertiv. We've got sort of two sides of it today to go through: Craig Chamberlin, CFO, and Scott Armul, CTO. Both sort of recent appointments, so congratulations to both of you, Craig and Scott. Maybe we'll start off around a more financial question and then go into more of the technology stuff. I suppose one thing that's exercised a lot of investor questions is around margins and, you know, how do these companies with a lot of data center exposure, like Vertiv, manage very high production growth, capacity additions, and still generate decent operating margins, high incrementals, and so forth?
So maybe just start there, the confidence around that. It was a good end to the second half in the Americas for Vertiv last year, but how do we see the year ahead on incrementals?
Yeah, I mean, I think we talked about it a little bit on the earnings call. We're still guiding here to 28% in the Q1, ending up at 29% for the year, with the path of, you know, the expected incrementals being somewhere in the low thirties, you know, 30-35%, long term. And you asked the question of how we get there, or what we drive. I think it's a couple of things. One, it's the price-cost equation. You know, we drive price to offset our cost inputs to make sure we stay margin neutral, and to gain price where we can, especially on the back of technology and on the back of offerings that we really think stand out in the environment and in the space.
So I think that's a big driver for us. I would also say we get a lot of operating leverage in terms of how efficiently we use our footprint. You'd mentioned or asked a little bit about bringing on capacity. I would say this year we are feeling a little bit of inefficiency from bringing on some brownfield capacity and greenfield capacity. You'll see it a little bit in the Q1; that's why it's kind of the low point. We're bringing a greenfield on in Asia and some brownfield expansions in the Americas. But as those sites get up to full capacity and full run rate, you would see that pull through. So that's what brings the incrementals up as we exit the year. So, I mean, I think, again, we go back to productivity in the shops.
That's a major thing for us. Ensuring that our price-cost equation is positive, and then pricing for technology and what I'd call differentiated advantage in the market. Those are the things we feel really strong about. We feel like that would be our margin play going forward, so that's kind of where I'd lean in.
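[To make the flow-through arithmetic in this answer concrete, here is a minimal Python sketch. The revenue base, growth, starting margin, and flow-through rate are hypothetical illustrations, not company guidance:]

```python
# Incremental ("flow-through") margin: the share of each additional revenue
# dollar that drops through to operating profit. All figures below are
# hypothetical illustrations, not company guidance.
prior_revenue = 8_000.0                       # $M
prior_margin = 0.20
prior_op_profit = prior_revenue * prior_margin

new_revenue = 9_200.0                         # $M after hypothetical growth
flow_through = 0.30                           # "low thirties" incremental margin

new_op_profit = prior_op_profit + (new_revenue - prior_revenue) * flow_through
print(f"blended operating margin: {new_op_profit / new_revenue:.1%}")  # -> 21.3%
# Flow-through above the starting margin pulls the blended margin upward,
# which is why rising incrementals through the year lift the full-year margin.
```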
Great. And, you know, when you're thinking about the, I suppose, working capital side of things-
You know, it was interesting that you had the deferred revenue as a big source of cash as the orders picked up last year. You know, is that something that's become a lot more relevant as the size of project and order wins has got-
Yeah
... larger? Because I guess historically, it wasn't a feature of the landscape for electrical equipment.
It has, and I think as you get more to what I'd call project-based or system-based offerings, where you're offering a larger scale and scope, we want to make sure that we drive towards cash neutral to cash positive across the life cycle of that project. So if we take an order and we know we're going to deliver over 12-18 months, how do we stay in front of that cash curve so that we're not, you know, funding the project, basically? And that's what we look at with milestone progression and understanding how that works. I think it's something the market has accepted, especially given the fact that in a lot of these spaces, people want to move as fast as they possibly can to get equipment in.
So for us, it helps us assure our balance sheet, it helps us assure our cash position, and it also helps us assure that the deal is a deal that's going to go forward, and that we feel very positive about the financial standing of that deal. So it kind of solves three problems for us, and we really like the position that it's gotten us into. And I think what you saw in the Q4, because of the order ramp and a lot of orders closing there, is that it did pop our cash up. So it was a little bit above what we would normally run in terms of free cash flow conversion, and we'll probably see that settle back down a little bit next year.
But we still feel like we're positive on working capital, and a lot of that is driven from strong management of inventory and strong management of payables, but you do get an uplift as you see the order intake and what we call advance payments or milestone payments.
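[A minimal sketch of the "staying in front of the cash curve" idea: compare cumulative milestone billings against cumulative cost outlays over a hypothetical 12-month project. Both schedules are invented for illustration:]

```python
# Toy model of a project billed against milestones. Both series are a
# percentage of total contract value per month; schedules are hypothetical.
costs    = [5, 10, 10, 10, 10, 10, 10, 10, 10, 5, 5, 5]    # cost outlays
billings = [25, 0, 15, 0, 20, 0, 20, 0, 10, 0, 5, 5]       # advance + milestones

assert sum(costs) == sum(billings) == 100

# "Cash neutral to cash positive" means this running balance never goes
# negative, i.e. the supplier never ends up funding the project.
cum_cash = 0
for month, (bill, cost) in enumerate(zip(billings, costs), start=1):
    cum_cash += bill - cost
    status = "OK" if cum_cash >= 0 else "supplier is funding the project"
    print(f"month {month:2d}: net position {cum_cash:+3d}% of contract value ({status})")
```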
Great. And then on sort of orders and backlog, I know Lynne had asked me to ask you a lot of questions about orders, guidance, and so on, but if I stay away from that for a second, the backlog composition, you know, it seems like a lot of longer-dated orders have been placed by data center customers. But at the same time, the sort of profile of your backlog, I think-
Yeah
... you described as being similar. So-
I would-
Help us understand how to think about those two things.
Think of it... and this is the way that I think about it, and I think Gio described it pretty well. If you think of the backlog as a square, you fill in the front with the earliest orders, and it goes back from there. What we're starting to see is the back half of that filling up more. So the back half of the 12-18 months is filling up more in terms of orders that we're getting, elongating what we call the fill-up of the backlog. So when we say the orders that we saw in the Q4, especially in December, are filling in that back half of the backlog, that's what we're really, truly seeing it progress towards. So historically, you've seen a run rate across the quarters that was much more flat.
This year, in 2025, we saw that spike in the Q4. Now, Julian, those orders in December could have moved to January, and we wouldn't be having this conversation because it would've looked normal. But we were glad that they closed in December, and that's where we say it references the back half of that backlog tail. It doesn't change the shape of it; it just fills it in in a different way.
Got it. And so the sort of execution profile of it is not that different?
It's not that different. Still, we always say between 12-18 months is where we feel like our backlog's going to be executed through. I would say historically, what you've seen is we fill up the front, and then it starts to fill up the back. I think this time, December kind of filled up that back end a little bit and helped us shore up that whole backlog profile. And that's where you see kind of the dance: if you get an order in December, is it for delivery in December of 2026 or into 2027? That's some of the dynamics you saw, where the math doesn't work exactly right because of the phasing.
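[A toy sketch of the backlog-as-a-square picture described here: orders land in delivery quarters across a 12-18 month execution window, and a late-year spike with longer-dated deliveries fills the back half without changing the window itself. All order sizes, dates, and lead times are hypothetical:]

```python
# Toy illustration of the "backlog as a square" picture.
from collections import defaultdict

backlog = defaultdict(float)          # delivery quarter -> $M scheduled

def book_order(amount_m, booked_q, lead_quarters):
    """Fill in the backlog at (booking quarter + lead time)."""
    backlog[booked_q + lead_quarters] += amount_m

# A flat run rate of orders fills the front of the window first...
for q in range(1, 5):                 # Q1-Q4 of year one
    book_order(500, booked_q=q, lead_quarters=4)   # ~12-month deliveries

# ...while a Q4 spike with longer-dated deliveries fills in the back half
# of the same 12-18 month window, leaving its overall shape intact.
book_order(800, booked_q=4, lead_quarters=6)       # ~18 months out

for quarter in sorted(backlog):
    print(f"delivery quarter {quarter}: ${backlog[quarter]:,.0f}M scheduled")
```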
And this is something, I think, for both Craig and also Scott: this question around, you know, the extent of that order flow, I think, surprised many people versus other electricals, and I realize your orders and theirs are also very lumpy. But is there a shift in customer behavior around looking to buy more from a systems supplier? Because I think when people think of hyperscaler customers and the larger colos, they're very sophisticated, so they can go best of breed. They don't necessarily need the external party to kind of tell them what to do, but it seemed like maybe there was some shift towards a systems purchasing approach. Are you seeing that, or not necessarily, and it just differs project to project?
Well... I can start, and I can pass it on to Scott. I think there's a couple of things that you're seeing. One, the systems-level approach is there, and it's there because of reference designs, and it's there because of, I would say, the partnership. Not necessarily that we're going and telling somebody what to do, but we're going in and listening and understanding what they need and helping them develop the solution or the system that's best to service the duty cycle of what they're trying to solve. And we think of the system-level thinking as stacking, right? You buy a CDU, then what can we stack on that in the thermal chain that would continue to add value to the broader portfolio? The same thing on the powertrain side. How do you continue to stack out from there?
When you're in there having a conversation about what the duty cycle of that data center is going to be, or what the ultimate solution they're trying to provide is, it's easy to have conversations about how to add to that architecture. If you're partnering with them, and you add to the architecture the right way, you kind of get the reference design of your equipment that would be able to service that need. You know, we're seeing a little bit of that. That's point one. Point two, and we kind of highlighted this, is that some of the prefabricated solutions are solving an issue that we see in the market today, which is just the lack of an available labor force in the industry.
When you can do some prefabricated work, whether it be, you know, plumbing or electrical or whatever, in the shop, in a controlled environment, and you provide a SmartRow or a OneCore where you have some of that already done, it takes a little bit of pressure off the actual builder. So I'm going to pass it on to Scott. I'm sure you have some thoughts.
Yeah, and related to how that dovetails into orders and the composition of the backlog, I wouldn't describe it as a material change, but we are seeing maybe a lot of the new entrants into the space, the neoclouds, if you will. A lot of the folks that maybe have land and power and are starting to build data centers are getting into that build cycle now, where we've been through planning and we've been through some of the engineering design together, and I think that pairs very well with exactly what Craig just talked about.
Maybe some customers and customer profiles that don't have huge, robust development teams or huge, robust engineering teams would lean on a partner like Vertiv for reference designs and architectures and how the system fits together, and then you dovetail that very quickly into prefabricated solutions. A lot of companies, obviously, are trying to go fast and move toward faster time to market, faster time to token and token to revenue, and things of that nature. Prefabricated solutions that help define the data center for AI as more of a unit of compute, as more of a purpose-built design specifically for that workload, can then pivot very quickly into prefab solutions that are more factory-oriented, more factory-built and quality controlled, and then deployed on site so that we can eliminate some of the labor hours on site.
You can kind of smooth out some of the overlapping contractors, mechanicals, trades, all working at the same time. I think the neoclouds, as well as even some of the mature hyperscalers that see that, are all pivoting toward this concept of further embracing prefabricated and more purpose-built solutions in order to go faster and scale at the level that we're talking about here.
Great. And, you know, when you're thinking beyond the very near term, and I know this one is more for Scott, perhaps: when you look at overall data center equipment, where Vertiv plays in data center physical infrastructure, across the full suite of products, whether power or thermal management, which ones do you think offer the higher versus lower end of the medium-term growth spectrum?
Yeah, it's interesting. I'm not sure I could pick a particular product category because we're seeing a huge influx of just demand for greenfield data centers-
... especially as we're still kind of in the build-out of training models. We're seeing a real push behind human usage of AI and inference deployments and things of that nature. We're looking at turnkey data centers and end-to-end data centers, so there's heavy interest in the powertrain. Obviously, the thermal part of it, and especially the growth of liquid cooling, is the answer that probably a lot of folks are looking for, but we're seeing heavy interest and heavy demand across the board. Typically, when you get into data center planning, you start with power, and you start with power architecture, and that very quickly moves into how all of the block sizes align.
What's the right structure for me to set up a pod or scale to a larger building block of a data center? From our perspective, we really see that design happening holistically. Generally speaking, you can still drive towards best of breed and piece parts and individual product solutions, but having a thought process behind "this is going to be a 100-megawatt building that scales to a 400-megawatt campus," or "200-megawatt buildings that scale to gigawatt campuses," it's very hard to do that piecemeal and individually.
Understanding how those blocks come together, how that entire design works for the GPU and TPU generations of today, and how it needs to evolve over the next couple of generations that are coming in very short order becomes a very interesting engineering exercise and joint development exercise to make sure that we're planning in and driving towards flexibility. So that's where we start to see... It becomes more about planning for that capacity and how the data center will evolve, and less about individual product lead times and individual product growth rates. So maybe a cheap answer, but we do see it across the board, and it probably, I think, effectively goes back to the last question.
We see a tremendous amount of interest and uptick in growth for Vertiv in prefabricated and more of the infrastructure solutions. So prefabricated white space, like our SmartRow product, and prefabricated building blocks for power modules, service corridors, and hydro modules, a liquid cooling approach that can be scaled into a holistic data center, like our OneCore solution. That saw tremendous growth, and I think it made up a significant portion of our order profile, as Gio talked about on the earnings call, and we see that as more of a secular trend here for data centers in terms of the approach to build.
On that point, you know, market commentators sometimes bifurcate things between gray space versus white space. Do you see any difference in relative growth rates between the two for Vertiv or the industry from here? And is Vertiv leaning into one more than the other right now, or is it pretty balanced?
Yeah, it's interesting. We're across the board now, in kind of the outdoor environment, the gray space, and the white space. I think traditionally, folks would have thought of Vertiv as the heavy infrastructure, more gray space-oriented player, but we keep coming back to our SmartRow solution.
Yeah.
When we think of prefabricated infrastructure that can help stand up and turn over a white space much faster, that SmartRow is the hot aisle containment, the structure for racks, the busbar and power distribution, the secondary fluid network, and all of the intelligence and control around it, that we can move into a data center and put in place. It's factory built, it's a single lift, and it significantly accelerates how fast a customer can traditionally stand up white space. And then it also enables us to have more of a view of, like I was talking about before, what the generational changes are, and how those blocks that stand up a GB300 today pivot towards supporting a Vera Rubin or an accelerated TPU architecture 18 months or 24 months from now. So we have a lot more of a presence in-
... in the white space than we maybe have had in the past, or maybe would have been considered to have in the past. And then you add to that some of the capability we've added and bolstered in recent months. With our acquisition of PurgeRite, services in the white space are becoming much more relevant, in that standing up a liquid cooling network or a secondary fluid network is not a simple task. It's not for the faint of heart. It requires experience. It requires an understanding of how all of these things are supposed to work together, and then there's the added content of flushing, filling, setting up, doing fluid management, and making sure that the network is ready to go and ready to have racks deployed.
There's just a lot more content and focus from a critical infrastructure perspective in the white space. The gray space is just as critical, and I think we're starting to see much more thoughtfulness around block size, interconnectedness, how medium voltage and low voltage switchgear interfaces with UPSs, and how we need to be thoughtful around the mechanical yard and heat rejection, especially as data center sites are growing to massive scale. Those become different and more interesting problems to solve.
The fun part for us at Vertiv is that we're looking at each of those pieces and how they're interconnected, and trying to ask: how can we do this faster, with simpler and easier deployment, bigger scale, and more effective cost, so that we can manage the capacity, the scale, and the speed that's been coming at us?
I suppose one trend that's been talked about a lot is higher voltages.
I don't want to use the word high, but let's just say higher for now-
Thank you for that.
Into the IT room. You know, there's a lot of questions around what types of power infrastructure may become less important versus some emerging ones, like solid-state transformers, which could become more important. How is Vertiv positioned for that?
Yeah, I think we're right in the center of this power architecture evolution, and to a great extent, we're happy and pleased to be able to kind of champion some of the transitions that are going to need to happen here to enable the high-density architectures and the super high-density chip approaches that are going to be coming at us. From a DC power specific perspective, I think that's where a lot of the headlines-
Yeah
and a lot of the discussion points have come around this concept of ±400-volt DC or 800-volt DC. We've been fairly public about our support of and investment in that area to enable the Kyber racks and some of the things beyond the Rubin Ultra and Feynman chips that NVIDIA has announced. The timing is still maybe a little bit up in the air and iterative, but I think there's an inherent physics challenge coming at us, and that's the reason behind the pivot to higher voltage DC architectures.
We physically run out of space for a busbar, for rack power distribution, to be able to effectively move the amount of power into 600-kilowatt, 800-kilowatt, megawatt-type racks and multi-rack structures in a pod. Moving to a higher voltage DC architecture helps solve the power distribution problem, it helps with heat management, and it gives us an overall more balanced structure in moving to DC on how we manage energy storage, how we manage kind of energy sources, voltage ride-through, and some of the other maybe more existential data center challenges that come when you have extremely volatile and extremely dynamic workloads operating at scale.
It's one of those things that helps enable the architectures of the future, in a way that doesn't require maybe a binary shift or a significant overhaul to the way in which we've done things. So from a Vertiv perspective, we're developing and we'll be launching a pretty comprehensive 800-volt DC portfolio that starts with-
... what we'll call a sidecar type of a system-
That allows us to take traditional 480-volt AC infrastructure and come to a device, a power conversion piece of equipment at the end of a row, that converts to 800-volt DC, which gives us some flexibility across generations. If we have traditional AC architecture, we can still run that in. If we have 800-volt DC workloads, we can convert to that, support it, and manage it. And as the sites continue to mature and evolve, we can work towards some of the bigger-picture, more purpose-built types of power solutions further upstream, like integration of UPS functionality within medium voltage, or the concept of large power converters or solid-state transformers.
It allows us to explore those technologies and really drive that path forward without having to make, I'll call it, a binary commitment to one architecture over another.
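[The physics challenge described above reduces to simple arithmetic: at fixed power, conductor current scales inversely with voltage. A sketch, with illustrative rack powers and an assumed ~54-volt in-rack DC bus as the baseline:]

```python
# Why higher-voltage DC matters for rack power distribution: at fixed power,
# conductor current (and with it busbar cross-section and I^2*R loss) scales
# as 1/V. Rack powers and bus voltages below are illustrative assumptions.
def busbar_current_amps(rack_kw: float, bus_volts: float) -> float:
    """DC current required to deliver rack_kw at the given bus voltage."""
    return rack_kw * 1e3 / bus_volts

for rack_kw in (120, 600, 1000):
    i_low = busbar_current_amps(rack_kw, 54)      # ~54 V in-rack bus common today
    i_high = busbar_current_amps(rack_kw, 800)    # +/-400 V DC architecture
    print(f"{rack_kw:5,.0f} kW rack: {i_low:8,.0f} A at 54 V vs {i_high:6,.0f} A at 800 V DC")

# A megawatt rack on a 54 V bus would need ~18,500 A of busbar, which is
# physically impractical; the same rack at 800 V DC needs ~1,250 A.
```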
That's great. And then away from the power management side, maybe switching to thermal management for a second. There's a lot of discussion around liquid cooling, you know, how much it will outgrow traditional thermal management products like air handling units or chillers. What's Vertiv's perspective on that? And I don't know if you would hazard a view as to how much of your thermal management business could be liquid cooling in a few years' time.
Yeah, mix-wise and percentage-wise, I won't venture into that territory, but obviously, the momentum is behind liquid cooling, and all of the new generations of chips are firmly in the wheelhouse of single-phase, direct-to-chip liquid cooling. So that becomes almost the standard part of any good thermal chain or system design. The important thing to recognize and remember is that if liquid cooling is deployed, the heat still has to make its way through the data center, and it ultimately has to be either rejected or reused at the facility site. So chillers, heat rejection devices, heat exchangers in general, dry coolers: an entire, comprehensive portfolio of heat rejection technology is required to get the heat out of liquid cooling.
We actually see a pretty significant, say, evolution and maturation in much the same way we're talking about power architectures evolving.
From a physics-based perspective on heat rejection, I know there's been a lot of discussion, and maybe a lot of follow-up from some of NVIDIA's comments at CES, around the concept that we don't need a chiller any longer. We still need heat rejection, and I think 45 degrees Celsius water delivered to GPUs and chips in the data center is a great ambition. The reasoning is that if we can deliver warmer water to GPUs, we can reject heat without mechanical cooling. We can potentially lower the peak power draw of an entire data center site and move more of that peak power towards deploying more GPUs or increasing the cluster size. It's a great ambition, and we stand behind that ambition as well.
But the practical reality of most data center locations and environments is that rejecting heat at 45 degrees Celsius water is a pretty significant challenge. So one of the things we always like to talk about is a comprehensive heat rejection portfolio that is likely deployed in more of a hybrid scenario, where you're going to have chillers and dry coolers, or leverage a product like the one we have in the portfolio that we call our Trim Cooler. That gives you the best of both worlds: a very large dry cooler to manage free cooling, to reduce your peak power and increase your capacity, while you have the backup benefit of mechanical cooling and traditional chilling to use if and when you need it, on hot days and in warmer environments.
That is the type of thought process we like to apply to the overall heat rejection portfolio, because it has to go in lockstep with liquid cooling, and it will continue to be relevant.
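[Two small calculations behind this answer, under illustrative assumptions (a 10 degrees Celsius loop rise and a 5 degrees Celsius dry-cooler approach): first, the water flow a megawatt of liquid-cooled IT load requires; second, when free cooling alone can hold a 45 degrees Celsius supply setpoint:]

```python
# 1) Liquid cooling moves heat; it doesn't reject it. Water flow needed
#    to carry 1 MW of IT load around the loop at a 10 C temperature rise:
CP_WATER = 4186.0                                  # J/(kg*K)
it_load_w, delta_t_c = 1.0e6, 10.0
flow_kg_s = it_load_w / (CP_WATER * delta_t_c)
print(f"1 MW at a {delta_t_c:.0f} C rise needs ~{flow_kg_s:.0f} kg/s of water flow")

# 2) Hybrid ("trim") heat rejection: a dry cooler can only produce water a
#    few degrees above ambient (its approach), so hotter days need a trim
#    of mechanical cooling to hold the supply setpoint.
def cooling_mode(ambient_c, supply_setpoint_c, approach_c=5.0):
    if ambient_c + approach_c <= supply_setpoint_c:
        return "free cooling (dry cooler alone)"
    return "dry cooler + mechanical trim"

for ambient_c in (20.0, 35.0, 45.0):
    print(f"{ambient_c:.0f} C ambient, 45 C supply: {cooling_mode(ambient_c, 45.0)}")
# Even with a 45 C supply ambition, the hottest sites and days still call
# for mechanical trim, which is the hybrid point made above.
```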
I think one topic that is top of mind for a lot of people is the competitive landscape in liquid cooling. You know, in your traditional chiller business, there's half a dozen companies who are experts at it. For a product in liquid cooling like the CDU, maybe there's, you know, a hundred expert companies in theory. So how confident are you in Vertiv's technology edge or competitive position in liquid cooling versus, say, chillers? And allied to that, maybe a broader question for Craig around pricing across Vertiv. You know, you and a lot of competitors are adding masses of capacity. Demand is growing, sure, but the supply is growing at speed as well. So do you start to worry at some point? Are there any signs of price degradation in certain product categories?
Yeah, I'll start and-
Yeah
... hand it to you.
Absolutely.
From a competitive perspective, especially within liquid cooling, we feel very good about our position, and I think now, having a few years of experience at scale has really informed a couple of things. One is, we've talked extensively about the system view-
Yeah.
And the system perspective. So it's not only the CDU box, where it's critical to manage flow and pumps and heat exchanger rates and approach temperature; the core technology of a CDU does matter. And yes, you're right, there's probably a lot of focus here. There's a lot of new entrants. There's a lot of capability coming into the marketplace. But it's also how that CDU connects downstream to the secondary fluid network, how you're managing the intelligence and the control around that, how it connects upstream to chillers and heat rejection, and how we're managing both of those things together.
And then maybe most significantly and most importantly, when you go to deploy CDUs at scale at a large site... we've now seen this firsthand, and we've run through quite a few site deployments now, and I'll just say it's not for the faint of heart. Turning up that quantity, doing flush and fill, and managing the liquid cooling turn-up for sites of that scale and magnitude is very significant. And there's a lot of customers leaning on expertise and leaning on service content and other things that make that sale and that project much more than simply supplying a CDU. So we feel very confident that our customers see the value in more of a system approach to the design-
... as well as the scaling approach and the boots and expertise on the ground that are required to stand up liquid cooling at scale and at speed. So I think we'll continue to leverage those areas as our value and our differentiation in that story, and I think that's led to-
Yeah.
a fairly robust price position and capability.
I would say, again, I look at it in two ways, Julian. One is exactly what Scott said, which is that what you're seeing unfold in the Americas is a lot of this system-level thinking, especially with the deployment of AI data centers. And that is where the system level, I think, helps us hold our price and helps us get premiums, because the customers really see value not just in the technology that we provide, but in the partnership and the solution-based outcomes that we are driving. So there's a little bit more of what I would call opportunity there to hold price, or to get price for technology.
If you start thinking of places like the Asia space, where there's more competition on some of these levels, and maybe it's not liquid cooling but other spaces, you might see some pricing pressure there. And what we try to do there is be as selective as we can in how we win and build those partnerships, so that we can continue to drive that pricing for technology. So I think we feel really good about our pricing position. We've been able to offset our input costs as they've come in through pricing. We've been able to gain some margin there.
So I feel like right now, the strategy around system-level deployment and technology evolution to stay ahead of where our peers are, and the proven track record of our solutions and systems being something that you go to, is allowing us to have a pretty good position there.
Great. And then lastly, maybe, you know, a focus from Scott around organic investments, but there's an inorganic element, and there was obviously a large transaction, you know, announced in November for a liquid cooling asset. So I just wondered, what's the appetite at Vertiv for an M&A deal of size in the data center realm?
Yeah.
And any sort of-
I mean-
financial, you know, gating factors or framework we should bear in mind?
The framework remains consistent with what we've looked at before, and the framework would be, you know, first and foremost, we look at investments in ourselves, which is R&D. If we can build it ourselves, we like building it ourselves; there are great returns there. And we also like to invest in CapEx because we know that gives us delivery and a faster turnaround. Then in the M&A space, it's very strategic add-ons that we really like to look at. Can we fill holes in the portfolio? Can we get to market faster? Can we go to a space or a geography that we're not in and get a foothold and build out from there? That's the game plan and the strategic look that we take: understanding, you know, where the spaces are where maybe the portfolio could be enhanced.
To what Scott was saying, as you get into the white space, are there spots there where we may or may not have, you know, technology that we want to be able to stack on? As I keep talking about stacking onto the thermal chain and onto the powertrain, are there spots there where we could be advantaged? And we like the ones where, you know, we can take them and build them and grow them and scale them, because I want to be able to pay for something that I put value into, not, you know, pay a multiple on something that may be cannibalistic to what I already have. So it's a strategic look within that framework. And I don't know, Scott, if you-
Yeah, and maybe just from the technology lens, we're looking at it as... We feel very comfortable with our portfolio and our positioning, but obviously, everything we just talked about in terms of the evolution of architectures and where things are going-
Looking at those technology paths forward, and where we can use that to either bolster parts of the portfolio or, in many instances, accelerate or complement the organic plans that are already in place, is really the focus area for us.
Fantastic. Well, now we'll switch to the audience response questions, please. So the first question is around current ownership of Vertiv. So, still a lot of room for persuasion. The second is around current general attitudes to the stock. So, very positive on the whole. The third is around through-cycle EPS growth. I think we can probably have an educated guess here, but let's see. So, yes, above peers. The fourth is around uses of excess cash, and we were just talking about M&A. So, generally bolt-ons. The next question is on valuation and what the warranted year-one P/E should be. So, in the 20s. And the last question is around any gating factors, you know, why there's still 60% who don't own it. Great, so: execution on the product ramp.
With that, thanks so much, Scott, and also Craig, for being here.
Thank you, Julian.
Thank you.
Thank you.
Thanks a lot.