Great. Well, thanks, everyone, for being here. It's my pleasure to have up next Eaton Corporation. We have Michael Regelski, CTO of the electrical businesses at Eaton. Clearly a lot of focus on data center technologies, the evolution of power and thermal management within data centers. That's really going to be our topic to focus on today, rather than kind of near-term demand trends and so forth. So, thanks very much, Michael, for being here with us. Maybe a first kind of broad-brush demand question on data centers would just be around, you know, I think Eaton's talked about 100+ gigawatts of cumulative opportunity in data center buildouts in the U.S. Where are we on that journey? How do you see the market size evolving?
Okay. Well, Julian, thank you very much, and it's a pleasure to be here. Back in March of last year, we estimated that the data center build-out in the U.S. would be roughly around 100 GW by 2028. At the end of last year, we estimated that along that progression, there was between 35 and 40 GW of installed data center capacity. We see about 17 GW planned for 2026, and our later estimates show that there's a backlog of 165 to 200+ GW planned through 2030 and beyond. A lot of the estimates that we provided in the past, we feel very comfortable and confident about, and we see continued growth and upside.
You know, when you think about the durability of growth and kind of the recognition of that very large backlog, how many years of visibility are you starting to get because of that backlog, in terms of the outsized gigawatt installations?
I think if you go with the kind of planned migration and progression that's occurred, you know, we could see this going up to 10 years or so. I mean, there's only physically so much that can be installed. There's capacity constraints, there's labor constraints, et cetera, but all of this looks like it's gonna hold true, and you keep on seeing next generations of chips that keep on coming out that are gonna further AI. So, we think that there's a long tail here.
Perfect. You mentioned the 17 GW, I think, planned for this year. You know, how has the thinking around that evolved? Any context around maybe how much was put in in the last year or two, just to give us a sense of the scale of the industry ramp?
You know, probably not the best on that. I wanna say that in the year before that, there was probably about 11 GW-
Yeah
or so that were installed, and this is going to continually kind of ramp up.
Yeah.
Again, some of the limitations are going to be permits, and there are things that are just beyond our control and everything, but,
Mm-hmm
... we think that this is going to stay pretty steady, if not increase.
Within the data center, I think there's, you know, a growing trend to try and get higher-voltage power straight into the IT room.
Mm-hmm.
You know, when do you see that 800-volt DC coming in? You know, do you see much of a role for the 400-volt approach that, I think, people have talked about as a bridging path to that? Maybe help us understand some of those changes and what it means for Eaton.
Yeah. So, 800-volt DC to the rack is probably one of the biggest architectural changes that are starting to be designed into data centers, and a lot of those designs are taking place right now. You know, honestly, when you look at Eaton, I think that's one of the untold stories here: DC power is probably one of the biggest transformational things to hit the electrical industry since, quite frankly, AC electricity was around in the Edison days. But the designs are starting to happen now. And you look at why. Well, we know that there's a shortage of electrical power generation that's occurring to meet all of the power demands that are out there.
And if you just look from the utility feed all the way through to the rack, stepping down voltages from medium voltage to low voltage, and then down to 54 volts into the rack, and the conversion from AC to DC and back, we estimate that there's roughly about 5% electrical loss during that transition. If you could just go DC, directly from the utility feed, all the way through the data center into the rack, that's a 5% efficiency gain that you could get. So, on 100 GW of power generation that's needed, that's 5 GW of power that all of a sudden just appears from the existing infrastructure. And that is really exciting. Can people get there all in one shot? No.
As you mentioned, Julian, there are some bridge strategies to get there with 400 volts, but the underlying movement towards direct current, that's pervasive, and that's going to happen. As you start seeing higher and higher rack densities, the demand to move to DC power is going to increase.
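The conversion-loss arithmetic above can be sketched in a few lines. Only the roughly 5% total loss and the 100 GW fleet figure come from the discussion; the per-stage efficiencies below are hypothetical round numbers chosen to land near that total, not Eaton measurements.

```python
# Sketch of the AC-conversion-chain loss described in the conversation.
# Per-stage efficiencies are illustrative assumptions; only the ~5%
# total and the 100 GW fleet size come from the discussion above.

AC_STAGE_EFFICIENCIES = {
    "MV-to-LV transformer": 0.990,
    "UPS double conversion (AC-DC-AC)": 0.975,
    "rack PSU (AC to 54 V DC)": 0.985,
}

def chain_efficiency(stages):
    """Overall efficiency of a series of conversion stages."""
    eff = 1.0
    for stage_eff in stages.values():
        eff *= stage_eff
    return eff

ac_eff = chain_efficiency(AC_STAGE_EFFICIENCIES)
loss_pct = (1 - ac_eff) * 100
fleet_gw = 100  # cumulative U.S. build-out figure cited above
recovered_gw = fleet_gw * (1 - ac_eff)

print(f"AC chain efficiency: {ac_eff:.3f} (~{loss_pct:.1f}% loss)")
print(f"Power freed by an end-to-end DC path on {fleet_gw} GW: ~{recovered_gw:.1f} GW")
```

With these assumed stage figures, the compounded loss comes out just under 5%, which is how a few gigawatts "appear" from existing infrastructure at fleet scale.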
Got it. And I think, you know, solid-state transformers are something that we hear about as a product category that will, you know, have a big uptake in that transition. Maybe help us understand why the solid-state transformer is important in that transition, and, you know, how well-placed is Eaton there in terms of developing solid-state transformers and starting to get the capacity in place to build them?
Sure. So, when you look at power coming into a data center, if we start there, there's a utility feed, and that's medium voltage, and then it goes through a traditional transformer, gets stepped down to low voltage, and gets distributed throughout the data center, and all that's AC power until it gets to the rack. The medium voltage solid-state transformer takes the utility feed and can convert it directly using power electronics into DC, direct current. That direct current can then flow throughout the data center, through all the switchgear, all the busbar, directly into the rack. So, you're avoiding all the conversions. You're already at DC power, which is what the servers and the chips take in order to provide the electrical power to make them run. So, that's why that's so important.
It not only eliminates the loss, it simplifies the whole architecture of the data center. Everything now is direct.
Got it. And, you know, as CTO of Electrical and thinking about Eaton's position in solid-state transformers versus, you know, other companies out there, some of them at this conference, who are developing solid state as well, how comfortable do you feel about Eaton's position there in terms of technology lead? When do you think it might start to come into sort of mass production at Eaton for the solid-state transformers?
You know what? I feel actually really comfortable, and really confident, about our solid-state capabilities. Just to put that in context, we started investing in next-generation power electronics about 10 years ago, and we saw a trend that says DC power is going to be prevalent. Why is that? Because most of the loads outside of motors that require electricity are DC in nature. If you think about your phones, if you think about anything inside of a house, LED lights, they're all DC. So, we thought that that was going to be the case, and we started investing very heavily in it. I mean, today, we probably have the most power-dense UPS because of the next-generation power electronics that we invested in. So, we saw that coming.
We just didn't know the market segment or the timing and what would happen. So, about 3 years ago, we started investing organically in our own medium-voltage solid-state transformer. We actually have pilots that we're running today over in Asia Pacific using that, and then we acquired Resilient Power because of some of their breakthrough technology in medium-voltage solid-state transformers as well. So, we feel very good about where the direction is going, the investments that we made, and how it's going to start helping data center customers optimize their energy flow throughout a facility. Now, as for the timing of when it hits, no one's really sure. We're working with customers today, and they're pulling us in to say, "Help us start designing what that data center looks like."
We'll know when the orders come in, but based on pilots and on interest, we think it's going to be sooner rather than later. It's probably in the 2- or 3-year timeframe before it starts getting mass adoption, and a lot of that, honestly, is timed with the compute power that's increasing for the chips as well. So, those will be kind of in coexistence. As you start approaching megawatt racks, you're going to start needing this technology to provide the power into the rack.
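A quick way to see why rising rack power pushes toward 800-volt DC distribution: for a fixed power draw, current scales as I = P / V, and conductor losses scale with the square of that current. The voltage levels below are the ones discussed in the conversation; the 1 MW figure is the megawatt-class rack density mentioned above.

```python
# Current into a rack at the voltage levels discussed above.
# I = P / V: the same 1 MW rack needs ~15x less current at 800 V DC
# than at the traditional 54 V DC rack bus.

RACK_POWER_W = 1_000_000  # megawatt-class rack, per the conversation

for volts in (54, 400, 800):
    amps = RACK_POWER_W / volts
    print(f"{volts:>4} V DC -> {amps:>10,.0f} A into the rack")
```

At 54 V a megawatt rack would draw well over 18,000 A, which is impractical for busbar and connector sizing; at 800 V the same power needs 1,250 A, which is why the higher DC voltage and the intermediate 400 V step keep coming up.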
Historically, as you said, you know, Eaton's strength is in the lower and medium voltage side of things. As you get more higher-voltage activity inside the data center, are there some risks there, or do you think, because of the investment already undertaken, Eaton can do well in that higher-voltage environment?
Yeah, I feel very good about it. And it's interesting, in different market segments, high voltage means different things to different people. So, in an IT world, when you're looking at a server rack and you're saying 800 volts DC, that's looked at as very high voltage. When we think about the voltage that's coming in from the utility, that's 1,500 volts. So, 800 is-
Yeah
... actually pretty, you know, pretty low. So, we feel pretty confident in that. And, you know, there's one other thing that we're really looking at when we're helping customers plan out these architectures, and that's the circuit protection and power distribution side of this. It's one thing to be able to convert it, but you're talking about now a lot of power into a very small area with highly valuable equipment, and the circuit protection world has been designed around alternating current. And so, now you have to come out with a whole new class of protective devices to make sure that you can circumvent fault conditions and provide a safe environment for those assets to sit in.
And we've been investing in solid-state circuit protection and hybrid circuit protection, you know, for several years to try to go and meet this demand. And you may say, "Well, okay, why is that important, a circuit breaker?" And I won't go technical, but I will give a very easy example. When you have alternating current, the current goes up and down, crossing zero, and that zero crossing is where a mechanical circuit breaker can interrupt the circuit safely. When you have direct current, the current is always on, there's no zero crossing, so you have to interrupt the circuit at a much faster pace if you're going to provide a safe condition. A lot of people overlook that, saying, "Well, AC, DC, it's..." No, it's totally different, and you have to come up with different schemes, and that's why customers are bringing us into these.
Now, I can't just get somebody from IT to go into a rack and pull out a server. There's a lot of power coming in.
Mm-hmm.
I have to be able to do it safely. Do I need to wear protective equipment? How do I go and make sure that that environment is safe for people as well as the assets?
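The zero-crossing point above is easy to quantify. On a 50 or 60 Hz grid, the current passes through zero twice per cycle, giving a mechanical breaker a natural arc-extinction window every 8 to 10 milliseconds; DC has no such window. The microsecond figure for solid-state interruption in the comment below is a rough industry ballpark, not an Eaton specification.

```python
# Why DC protection must act faster than AC protection: a mechanical
# AC breaker can wait for the next natural current zero to extinguish
# the arc, while DC current never crosses zero and must be interrupted
# by force, typically with semiconductors switching in microseconds
# (a rough ballpark, not a specific product figure).

def zero_crossing_interval_ms(freq_hz: float) -> float:
    """Time between successive current zero crossings (two per cycle)."""
    return 1000.0 / (2 * freq_hz)

print(f"60 Hz AC: a zero crossing every {zero_crossing_interval_ms(60):.2f} ms")
print(f"50 Hz AC: a zero crossing every {zero_crossing_interval_ms(50):.2f} ms")
print("DC: no zero crossing, so interruption has to be forced, far faster")
```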
On that point, you know, Eaton's well-positioned, I think, 'cause it has a very broad remit within data center electrical equipment. In the conversations with hyperscaler customers, you know, do you think they're moving more towards that systems approach? You know, historically, it seemed like it was more kind of best of breed, let's say. As you get these technology changes in the data center, you know, do you see the customers evolving towards more of a systems purchasing arrangement?
Yeah. You know what? It's a really interesting dynamic. All customers traditionally look at individual products, and you have to have open interfaces because people want the ability to choose best-of-breed components that they can fit together. They also want to have, you know, multiple sources of supply as well. However, when you're looking for optimal efficiency, and you're trying to make sure that all of the components can fit together, especially when the industry isn't established, they're really looking for a systems approach. They're saying: How can you help us design the system from end to end so all the things work together? And then at the boundaries, can we make sure that there's a best-of-breed component in there so that I can have the choice of multiple suppliers?
So, it's an industry dynamic that's, you know, in existence, but we're finding more and more right now, and we think this is going to be the way as long as people are maximizing efficiency and value, that they're gonna want a systems play. They're gonna want people to say: How can you help us do all the work so that we don't have to do the engineering ourselves? And, you know, probably the best analogy I'd give is if you think about an Apple ecosystem, you know that everything just kind of works. You don't have to be the engineer on it.
Mm-hmm.
More and more, that's kind of what customers are looking for.
Great. And I think, you know, within the data center overall, Eaton has historically been very strong in the so-called gray space, and I suppose that has been growing very quickly. There are some question marks around whether white space starts to grow faster, maybe, as server technology is changing. So, maybe two questions there. How do you see gray space versus white space growth rates from here in the data center? And, you know, how do you feel about Eaton's positioning in the white space, not just in the gray space, where it's clearly very strong?
Yeah. It's an interesting evolution. I think historically, if you look at kind of data centers and how they were designed, you had power systems engineers working on everything in the gray space, and you had IT professionals working on everything inside of the white space. As you start getting those rack densities that are gonna be up to 1 MW, there really is no differentiation in the power between the gray space and the white space. It's flowing seamlessly. We're starting to find ourselves getting pulled into these design discussions because everybody's sitting at the table and saying: How does the power flow from end to end, all the way from the utility into the building and then through into the rack?
So, those discussions are evolving, and I think especially as DC power becomes more prevalent, gray space, white space delineation is gonna start becoming more and more artificial when you look at it from a pure power perspective.
Interesting. And on that point on gray space, you know, historically, it was sort of power coming from a centralized source. Now there's more of a push for distributed power generation, kind of at the data center site itself or very close to it. How does that trend affect Eaton? You know, understanding you don't do much on the pure generation side, but there's some implication for the electrical equipment anyway.
Actually, you know, it's one of the things that we don't necessarily talk about a lot, but it plays in very well for Eaton. And, you know, we noticed this trend a while ago: power flow now is not just unidirectional, it's bidirectional. So, you really have to have that mindset when you're designing your equipment. And there are a couple of things, you know, to manage the flow of electricity from the utility feed as well as on-site generation. We have technology like microgrid controllers that help manage how that flow gets optimized for the facility, and we've also put a lot of technology inside of our equipment as well. To give you an example, in some of our high-powered UPSs...
We have this feature called Energy Aware, and what that does is it takes all of the capacity of the battery that's just sitting there waiting for a failure, and it says: How can I take portions of that and allow the data center customer to provide it back to the utility to do frequency regulation, so the utility operator doesn't have to spend more CapEx to put infrastructure in to manage those small, minute fluctuations in power? So, now that UPS becomes grid interactive and provides a benefit back to the utility, and the data center operator and utility can maximize efficiency. So, we see those trends happening, and actually, that's pretty exciting for us.
Great. If we dial into, you know, an emerging part of the white space, liquid cooling has been a theme for 12-18 months now. Maybe help us understand how you see the cooling loop today, how it will evolve, and liquid cooling technology as you get different types of high-powered chips emerging?
That's a great question, Julian. You know, it really starts with the chip. When you look at chips that are, you know, 1,000 W of power, and you have racks that are, you know, 100 kW, 200 kW, up to, you know, 1 MW, you have a lot of heat that's being dissipated from all that power in a very, very tight area. You know, one of the things that's used to dissipate that heat is the cold plate. And why the cold plate? Well, inside of these tight cabinets, you don't have enough room to move airflow, so you have to do it through some other means. And cold plate technology is advancing very, very rapidly. It's actually a very high-precision thermal management device.
There are different materials that are going in to minimize the resistance between the silicon chip and the cold plate and extract as much heat as possible. There are different microchannel architectures that are being designed into the cold plate, optimized specifically for the thermal profile of that chip. So, it might look like just a bunch of grooves that you're moving water through, but it's actually a highly precise piece of thermal management equipment optimized specifically for that chip. And you mentioned before the systems play. So, you can design cold plates today for very low thermal environments.
Okay, they can work kind of as a standalone component, but the more you get into these high-capacity, high-thermal devices, optimization with the CDU, to be able to move fluid through at the right pace, extract it, take it out, and put it through the heat exchanger, that's starting to become more and more of a systems play. You still have to design them independently, but it's becoming more and more optimized to be a systems play.
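As a back-of-the-envelope illustration of the fluid-moving job the CDU does, the heat a liquid loop carries away is Q = m_dot * cp * deltaT. The rack powers below echo the figures mentioned earlier in the conversation; the 10 degree C coolant temperature rise and plain-water properties are illustrative assumptions, not vendor specifications.

```python
# Rough coolant flow needed per rack: Q = m_dot * cp * deltaT.
# Rack powers come from the conversation; the 10 degC coolant rise and
# water properties are illustrative assumptions, not product specs.

CP_WATER_J_PER_KG_K = 4186    # specific heat of water
DELTA_T_K = 10                # assumed coolant temperature rise
WATER_DENSITY_KG_PER_L = 1.0  # approximate density of water

def required_flow_lpm(heat_w: float) -> float:
    """Water flow (liters/minute) needed to absorb heat_w at DELTA_T_K rise."""
    kg_per_s = heat_w / (CP_WATER_J_PER_KG_K * DELTA_T_K)
    return kg_per_s / WATER_DENSITY_KG_PER_L * 60

for rack_kw in (100, 200, 1000):
    print(f"{rack_kw:>5} kW rack -> ~{required_flow_lpm(rack_kw * 1000):.0f} L/min of water")
```

A megawatt rack at these assumptions needs on the order of 1,400 liters of water per minute through the loop, which is why CDU sizing and cold-plate channel design increasingly have to be co-optimized rather than treated as standalone parts.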
When you think about the customers and the kind of route to market for some of the cold plate product, it seems like there's a lot of competitors out there. Maybe... I don't know, how does the customer buying behavior evolve? You know, how confident do you feel in, say, Boyd Thermal's competitive position in that landscape?
Yeah. You know, we found a few things from talking with customers about what they value, and this is where Boyd Thermal gives us, we think, a really nice synergy with a lot of our power equipment. So, the first thing is that reliability is key, and Boyd Thermal's cooling technology really came out of the aerospace industry.
Mm-hmm.
So, aerospace is probably one of the most rigorous environments out there, and it's really simple, right? If something fails, people can die. So, very high reliability is there. Now, you have these data center apps that have to be protected, so reliability is key. The second is being able to work at a rapid pace with the chip manufacturers. So, you have chips that are coming out every 12-18 months, and one of the things that I think is underappreciated is the amount of modeling and simulation capabilities that somebody like Boyd has, and that you have to have to work with the silicon providers.
Being able to go and create a model of what a cold plate would look like, and then tailor that to a model of the thermal profile of the chip, and rapidly iterate through simulations that say: How will this extract heat? That's really critical in order to rapidly produce something. And then being able to take that and go from, okay, this works virtually, to a rapid prototype and then high-scale precision manufacturing. Those are all characteristics that we find the customers are looking for, 'cause they don't want these things to fail, and those are some of the attributes that really made Boyd a very attractive acquisition for us.
I think in the cold plate, people sometimes worry about, you know, say, how will Boyd cope in that transition to two-phase liquid cooling? How do you see that technology transition playing out for it? You know, do you think it can maintain very high market share even with that shift?
Well, so we think that even in single-phase cooling there's still a lot of runway to grow, and there's a lot that can be extracted out of the technology that exists today. Just look at some of the announcements that, you know, were made, where NVIDIA said, "You know what? Hey, we can use warm water cooling instead of cold water cooling." A lot of that's just due to the advances in cold plate technology and being able to reduce the thermal barrier with the chip and not have to use really chilled water to go and extract heat. And as we move into, you know, two-phase cooling, Boyd has been doing research in there as well.
It optimizes the CDU and the heat exchanger for that, but the cold plate is still front and center to the connectivity point with the chip. So, we see that as a lot of runway that exists in the current technology path, but a gradual migration as warranted into advanced cooling technologies.
Great. And then maybe some of the more specific questions on customers and competition for Boyd. I don't know how much you can talk about this, but, you know, I think there's sort of a market perception out there that NVIDIA and Google are a couple of Boyd's biggest customers. You know, any sense of the weighting amongst them? And I think there have also been some stories around Boyd maybe losing some share on the latest version of Google's TPU, version 7, to an Asian competitor. So, I guess anything you could say on that, and more generally, and it's part of the same theme, trust me, but there are a lot of Asian tech hardware companies in that cold plate-
Yeah
arena. You know, they have a different mindset, used to really rapid product cycles.
Mm-hmm.
So, you know, why are you confident that Boyd can cope with that type of environment?
Well, I could give some generalizations. As you said, we haven't closed on-
Yeah
... on Boyd yet. But, you know, Boyd is in discussions with all of the chip silicon providers, and they're using their modeling and simulation capabilities and designing cold plates to meet those chips. So, that's happening. You know, one thing you could see, and this is really evident from OCP discussions, is that nobody is going to lock themselves into any single vendor. So, OCP is out there to say: How do you co-develop with customers, but then move towards standards so that you can have choice and best-of-breed components? So, that's really always going to be the case. The thing that we're finding, though, is that customers are valuing: What is your background? How reliable are your products?
Can you work at the speed and then scale the manufacturing to meet our needs? And that's where we see Boyd as really being advantageous. That modeling and simulation capability really is key because that's how things are designed today in the world. That's how accelerated product life cycles are coming to bear. And, you know, there are gonna be a lot of competitors out there. I think the question that we're finding is being asked is: What's your pedigree and reliability? What's your track record? What are your known product failures? And is this a trusted supplier that we could count on to give us the speed that we need, the reliability that we need, and the efficiency that we need? And from everything that we can see, we think that Boyd carries a lot of those capabilities.
Fantastic. Well, with that, we'll switch quickly to audience response questions. The first question is just: Do you currently own shares in Eaton? A decent balance there. The second question is around general attitude to Eaton right now, regardless of ownership. Generally, a very positive approach. The third question is around EPS growth for Eaton versus the peer set here, which is broad U.S. multi-industry. Very high growth profile. The next question is on uses of excess cash. Eaton's clearly been pretty busy on that front already. A mix of smaller M&A and organic investment. The penultimate question is on valuation, I think, and what year-1 P/E Eaton should trade at. Sort of low 20s.
And the last question is, you know, any anchors on the multiple, or reasons that people don't own Eaton shares? So, a bit of a mishmash: sort of operational execution and margins, I suppose, which are fairly common concerns, and we can see that in the last kind of 6 or 9 months. So, with that, Michael, that was a real pleasure. Thank you so much for a very illuminating discussion.
Okay, Julian, thank you very much.
Thanks a lot.