Welcome, everyone. Glad you could all be socially distant. This is a long day for us; we've just met with a bunch of different investors, but we're glad we can meet with all of you here. We'll give a quick overview of the company. For those of you who listened to our Q1 call yesterday, there's not going to be a tremendous amount of new information. If you would like to dive right into questions, we can get into questions a little bit earlier; just let me know. Okay, diving right in: my name is Ben Gagnon. I am the CEO of Keel Infrastructure. I've been with the company for seven years at this point, and I'm leading our transition into the US and into HPC and AI.
A quick note that today's presentation contains forward-looking statements. We have just completed our rebrand and our redomiciliation to the United States. We are the fifth company in, I think, 15 years to redomicile from Canada to the US. There's a lot that goes into that, but we managed to get it done on time. We completed our rebrand and our pivot away from Latin America. Previously, we were focused on a very international portfolio of megawatts. That's completely changed. We are now very focused on the United States. More than 90% of our growth pipeline is in the United States, with about 2 GW in Pennsylvania, where we have the vast majority of our infrastructure. Our focus is really just on integrating power, land, and connectivity to enable the long-term growth of our customers.
The name Keel really comes from how we want to position ourselves in this new business. We are not trying to compete with Amazon or CoreWeave or Anthropic or anyone else. We are simply looking to help them grow their own businesses and deploy their own compute faster, with greater certainty, in the markets that matter most for inference. We are about halfway through our three-year strategic conversion. Everything that we said we would execute against in 2025, acquiring the Pennsylvania assets, closing out the Stronghold transaction, our US GAAP conversion, our LATAM exit, we have executed on all of it. Same thing so far in 2026: everything is on track and on schedule against our three-year transformation plan. Throughout the rest of the year, we have two big focuses.
The first is gonna continue development across our pipeline with Panther Creek, Sharon, and Moses Lake, moving those forward through permitting, lease execution, and beginning construction. The other one is gonna be working on expanding our secured capacity. We have a lot of energy capacity that's in the midst of an application, a review, an engineering process, and it's confirming that secured capacity. 2027 will be the year of delivery. We're not gonna turn on any HPC facilities in 2026. We're not gonna have any HPC revenues in 2026. 2027 is the year where the first facilities will come online, the first HPC revenues will begin. In 2028 and beyond, we'll just be focusing on scaling the business from there.
Looking at our portfolio at a high level, we've got a tremendous amount of secured capacity, a roughly 2.2 GW pipeline, and we're focused on executing across all these different areas. I think the big ones to really dive into here are the first three sites, but we do list all five sites where we're focusing on building HPC and AI campuses. The reason we're focusing on the first three, and will spend most of our time going through them, is that they're the most advanced: furthest along with permitting, architecture, engineering plans, and customer conversations. The other two sites, Sherbrooke and Scrubgrass, are also very exciting, but they are further behind and will take more effort and more time to execute against than the other three.
Starting with Panther Creek, this is our flagship hyperscale campus. This is a very, very unique location. This is about two to three hours outside of New York and Philadelphia. It's got 350 MW of grid-secured capacity, and we have the ability to potentially expand that beyond the 350 secured to upwards of 500 MW. I think people often forget how much location matters in this game because the capacity constraints around power have been so severe. For the multiple decades that the data center industry has been present and growing at a pretty predictable CAGR, location has mattered, and location has mattered more than almost any other factor. We don't believe that location, the value of a location, has changed or diminished. We believe that, you know, that's always been there.
It's just that the bottleneck on power has forced people to look beyond location, but location is still a very, very valuable selling point when you're in customer conversations. We've cleared zoning at this site, and we're working through land development and environmental permitting. We expect the last permits to be cleared by the mid-to-late-summer timeframe. We are active in the commercialization for this site right now. The most likely customer for this site, given its scale, is going to be a hyperscaler or a large neocloud. Really, the scale of the site is the biggest determining factor for who's going to have the capacity and the appetite for it.
With 350 MW and the ability to scale up to 500 MW or more, there are really only about 10 names in the world who could take on a site like this at this kind of capacity. It's the hyperscalers and the largest, most established neoclouds, and that's basically the universe of potential counterparties. We should have continued updates on this throughout the year, but this is going to be a very, very exciting campus to lease and to announce to the market when it's ready. The next site is Sharon. Sharon is 110 MW on the western side of the state, also fully secured. This is likely going to be our first site fully online in Pennsylvania, given it's significantly smaller than Panther Creek.
It's about a third of the size at 110 MW versus 350, or roughly one-fifth of the size if you're looking at the full potential 500 MW at Panther Creek. This is a really exciting site for hyperscalers as well as a lot of the neoclouds. At around 100 MW, the availability of customers and the kinds of customer profiles who can take on a site of this scale increase dramatically. A very interesting side note is that over the last six months, we have started to receive interest from financial institutions who are managing hundreds of billions in AUM. They're using agentic AI data centers to better manage their cash, to better understand and evaluate data, and they're doing a lot more agentic and algorithmic management of cash.
They are looking to scale up with sites that they fully control, and those enterprises are interested in sites as big as 100 MW or so. That's a new dynamic. I wouldn't have been able to say that six months ago, but this is the emerging enterprise application of AI, which is where we believe all the value is going to be created, starting to materialize first, I think, with these financial institutions. Last of the three sites is Moses Lake. This is the smallest of the three: 18 MW located in central Washington. This is likely going to be the first site completely online and generating revenue, possibly in the second quarter of 2027.
This is really geared towards emerging neoclouds, enterprise, and government, where the smaller site is more applicable to those kinds of customer profiles. Continuing forward, I think the big question for customers and for investors is really: why Keel over anyone else? The reason a potential tenant would be interested in Keel Infrastructure over some other counterparty is that we're solving some very high-value problems for that tenant. The first issue we're solving is just the timeline for power. 2027 power is very difficult to come by now. Most of it has already been contracted or leased out. There are very few remaining options for growth in 2027 as of today, May 2026. That scarcity provides a tremendous amount of value. Second would be the locations.
We don't have a single site in Texas. I think our southernmost site is in Pennsylvania, around 40° north. Our sites are in much cooler climates, much closer to major metropolitan areas, much closer to where the inference demand is going to come from. What you've seen in the market to date is a lot of deals for training, where the location is much more agnostic. Companies do not make money on training large language models; they only spend money on training them. They make money through the inference of those models, and that is where more and more of the market is going to shift over the coming years.
We believe that the locations will continue to matter more and more as inference becomes a larger and larger piece of the market and companies are really focusing on using the models as opposed to training the models. Third thing would be stakeholder relations. I'm sure you've seen a lot of headlines out there around community pushback at data center projects around the United States. We have developed a very good set of skills over the last seven years of building out data centers in rural communities around the US, Canada, and Latin America. The result of that is we actually have become very good at moving into a community, identifying the community leaders, developing relationships, learning about the community, understanding their pain points and what are the centers of influence, and really focusing on delivering better outcomes.
We do that proactively for every data center and construction project we run, and that creates a tremendously positive benefit to not only the community, but also to, you know, our shareholders because we're able to move forward with our projects with a community that is supportive and wants to see us succeed and is mutually aligned with our success. A big part of what we believe in is building in a place where people want you to build. We're not trying to force something through and trying to really face more opposition than we need. We want to go to the place where our investments are well-received and the benefits are having a mutually beneficial impact across not only us, but all the stakeholders in the area.
We do things like partner with local schools, support museums, and install speed traps that we donate to the police department, if that's what they need. We really focus on the outcomes, not, "Hey, we wanna spend $100,000 in this town," right? We're focused on learning what their pain points are and making their lives better, and that gives them a lot of confidence and trust in us as a developer and a good corporate citizen. The fourth thing is proven partners. As a first-time developer of HPC and AI infrastructure, having that credibility and that trust with the end customer is really hard to build without having incredible partners.
We work with some of the best names in the industry: Turner Construction on GC, Corgan on A&E, GT Law. We work with top firms in everything that we do, firms that have delivered for hyperscalers for years or decades. When we sit down at a table and we're having a conversation, the customer is not just relying on us; they're relying on the fact that they've worked with these counterparties for decades across multiple projects, and those partners are vouching for us. Having those proven partners is not only about de-risking the execution, it's about building confidence and trust with the end customers that we're going to be able to deliver. The last thing is future-proof designs.
You know, we were, I think, the first company in the space to talk about really focusing our efforts on building out Vera Rubin infrastructure. Most of the industry has been signing leases for Blackwell infrastructure. They're very, very different in terms of their engineering requirements. By the time our sites are built in 2027, the Blackwells will be a very outdated GPU. Our focus is making sure that we're building for the data center needs for when the data center is complete, not for the data center needs of today, because those are fundamentally different. We also believe that that's going to create more value for customers by solving a higher value problem. It's gonna be more valuable to deploy a Vera Rubin than it will be to Blackwell in 2027.
Finishing here, I think, on this slide: the value that we create for shareholders. Management is really focused on executing across three leases this year: Panther Creek, Sharon, and Moses Lake. We have the visibility on permits and customer conversations to make those statements and to be confident in them. Getting those leases done is, we believe, going to be the big value driver for shareholders this year. We believe executing even a single lease will be a big driver for the share price, and we have three leases to execute this year, so we hope there are three big impacts on the stock. You can see here in this gray box right here.
Well, that's reflecting the wrong way. You can see here in this gray box the value per megawatt, the EV per megawatt, of the companies who have not announced a lease. Just by announcing a lease, you generally have a re-rating event. We think that signing the first lease, if not all three, should bring us from this category somewhere into this category, creating a huge uplift for shareholders. The second big area where we create value for shareholders is converting our expansion capacity over here into secured capacity. When you look at how we're valued today, I think we're probably properly valued under the framework of how much power you have secured for 2027 if you're a Bitcoin miner.
I think that's the framework for a fair valuation of our share price today. I think we're being ascribed little to no value for our ability to sign leases and to deliver megawatts. When you look at our value, there should be a big value driver from executing on these leases over here and bringing them into this category. There should also be a big value driver from executing against projects like Scrubgrass, where we have about 60% of our energy pipeline that we believe we're getting little to no value for. Executing across that, securing that capacity, and bringing it into this bucket is, we believe, going to be the second big value driver for shareholders.
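To put rough numbers on the re-rating logic, here is a minimal sketch of the EV-per-megawatt math. Every figure below, the secured megawatts and both multiples, is a hypothetical placeholder, not a number from the slide or from market data:

```python
# Hypothetical EV-per-MW re-rating sketch. None of these numbers come
# from the presentation; they only illustrate the mechanics.

def enterprise_value(secured_mw: float, ev_per_mw_musd: float) -> float:
    """Implied enterprise value in $M from a per-megawatt multiple."""
    return secured_mw * ev_per_mw_musd

secured_mw = 480.0          # hypothetical secured capacity across three sites
pre_lease_multiple = 1.5    # hypothetical $M EV/MW, "no lease announced" bucket
post_lease_multiple = 6.0   # hypothetical $M EV/MW, "lease announced" bucket

ev_before = enterprise_value(secured_mw, pre_lease_multiple)   # 720.0 ($M)
ev_after = enterprise_value(secured_mw, post_lease_multiple)   # 2880.0 ($M)
uplift = ev_after / ev_before                                  # 4.0x under these assumptions
```

The point of the sketch is only that a re-rating, if it happens, applies to every secured megawatt at once, which is why a single lease announcement can move the whole valuation.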
The third is going to happen in 2027, when you actually deliver these megawatts to customers, the assets become stabilized, and you're generating revenues. Those are the three big value drivers we're focusing on for shareholders: one and two for 2026, number three for 2027. With that, I think that summarizes the presentation. I'm sorry if that was very repetitive for anyone who was listening to the Q1 call yesterday; it has been only one day, and there have not been material updates in the last 24 hours, so it did have to be a little redundant. I'm happy to dig into questions. You have myself and Jonathan here to answer whatever you'd like to ask. Yes.
Yeah. Again, can you just frame out the confidence in the leases? Because you're going from those last permits, at least on Panther Creek, in the mid-to-late-summer timeframe, to leases by end of the year. I guess, just frame how advanced the discussions can be so far, such that we can have confidence it's going to hit that timeframe?
Yeah. It's a great question. I'll repeat it for whoever's watching on the live stream. The question is, how do you have confidence in our ability to sign leases given where we are with the permitting timelines? The reality is that when you are going through a lease negotiation for a site like this, you can't go through the commercialization efforts too early because nobody's gonna wanna spend time, you know, evaluating and doing due diligence on a data center project that never comes to fruition, right? You have to wait till you've had significant milestones and progress and success, and you have a clear path that you can execute against, you know, with documentation and studies and reports and your meetings lined up.
You have to be able to demonstrate not only your success, but the clear path to how you get over the finish line, before a customer really wants to engage. You don't want to wait until you've cleared permits, though, because if you clear permits and then start this multiple-months-long process of due diligence, evaluation, and site negotiation, you're just going to unnecessarily delay the time to turn the compute on at that site, and timeline to energization is the biggest value driver in any of these conversations. The sweet spot is to wait until you're kind of where we are right now: you've cleared zoning, you've got a clear path forward, and you're confident that you're going to be able to execute in the coming months.
That's kind of the sweet spot for lease negotiations in our view, where we're balancing out risk and timelines. You know, what happens in a development like this is when you do clear a permit, like for instance, when we cleared zoning, we got a lot of inbound interest from potential tenants because there are so few megawatts in the areas that we have our energy portfolio that we found not only are investors paying attention to the permits, but potential tenants are paying attention to the permits and they're saying, "Hey, you just cleared this thing. I can see you're having progress. You know, it's time for us to be a little bit more serious in these negotiations and I want to start digging in further." We've been at the commercialization for a couple of months now.
It's really accelerating now that we're past zoning and we're making good progress on development and environmental. Given the location of the sites, we think the location in itself almost guarantees the demand, because there's no alternative option for another 350 MW site two hours away from New York. That alternative does not exist.
Sorry.
No, no, please.
Maybe just on Moses Lake, I noticed it's a little bit smaller site. Would you guys have any interest in deploying your own GPUs there, like running your own cloud, or is it fully colo?
For the listeners at home, the question was about GPU-as-a-Service at Moses Lake and whether or not there was any interest there. We talked about this on the Q3 call. Originally there was interest within the company to do GPU-as-a-Service at Moses Lake, and on the Q3 call we talked about the benefits of doing so. Similar to getting zoning approved at a site, that actually generated a lot of inbound interest for Moses Lake, mostly from neoclouds who said, "Look at all these benefits of deploying GPU-as-a-Service here.
This is the right kind of location features and scale for me." Based on all of the inbound demand, we said, "Why are we trying to complicate our business and run two businesses?" We don't want to run a GPU as a Service business and infrastructure business. We really just want to focus on infrastructure and given enough demand for the Moses Lake site, there's no reason to overcomplicate the story and try and do two things when we can just focus on doing the one.
Understood. You had mentioned Vera Rubin versus Blackwell. Some peers have said their sites can support both. I guess, where do you see that, if that's the case? Would you expect a different kind of CapEx overlay for the infrastructure? Obviously for the GPUs, but for the sites you're building out, how much north of the $9 to $11 million per critical MW you typically see would you expect?
Yeah. The question was: what's the difference in engineering requirements between Vera Rubin and Blackwell, can Blackwell infrastructure support a Vera Rubin, and what are the relative cost differences between the two? The required energy densities to do a full Vera Rubin rack build-out as Nvidia specifies versus a Blackwell are fundamentally different. I mean, you're talking about more than a 100% increase in energy density. If you were going to accommodate that in a Blackwell facility, what you would have to do is basically half-fill every rack, right? You could potentially accommodate it, but you wouldn't maximize it, and you probably wouldn't have full utilization of your megawatts if you're only utilizing half of your racks. That's the natural implication there.
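The half-fill arithmetic can be sketched in a few lines. The per-rack power figures below are hypothetical placeholders chosen to reflect the "more than 100% increase" described above, not vendor specifications:

```python
# Sketch of the "half-fill every rack" implication: if a hall's power
# and cooling were provisioned per rack position for one GPU generation,
# a denser generation can only populate a fraction of each position.

def populated_fraction(designed_kw_per_rack: float,
                       required_kw_per_rack: float) -> float:
    """Fraction of each rack position usable when the new generation
    needs more power per rack than the position was designed to deliver."""
    return min(1.0, designed_kw_per_rack / required_kw_per_rack)

blackwell_design_kw = 130.0   # hypothetical Blackwell-era design point
vera_rubin_need_kw = 300.0    # hypothetical Vera Rubin need (>100% higher)

frac = populated_fraction(blackwell_design_kw, vera_rubin_need_kw)
# frac ~= 0.43: under these assumptions you can fill well under half of
# each rack, stranding floor space even though the megawatts exist.
```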
It would be very unusual for a company to overbuild for that flexibility because the yield on cost is what is the most important factor here. Nobody wants to price in and pay for, you know, equipment and expenses and features that nobody's going to utilize because you're going to lock yourself in for a multi-year agreement where you're not planning on doing any upgrades. Building flexibility into the designs is not cost effective in our view and really degrades yield on cost. I don't think it's very likely that anyone would be able to accommodate a Vera Rubin in a Blackwell the way that they originally designed their data centers. There may be ways that they can kind of force it and make it work, but it's going to be suboptimal all the way through.
That's not what you want for the most bleeding edge compute on the planet to have a suboptimal kind of hacky solution for its deployment. I'm sure we'll see it, but it's definitely not what hyperscalers are going to sign up for. The second question was on relative costs. The reality is that the costs are still subject to be determined for Vera Rubin. Nobody's actually built a huge industrial scale Vera Rubin data center site as of today. The first Vera Rubins are actually just coming off the production line this quarter, and it's unclear even who's getting those allocations, if Nvidia is taking the first few months or if they're allocating them to select customers. We have had the Nvidia reference architecture for Vera Rubin for a long time, but it's not fixed, right?
It's updating, it's changing, it's evolving. If you go out to any one of these companies and you say, "Actually, give me a very precise cost per megawatt for what this will cost to build," nobody can tell you a precise cost per megawatt because the engineering requirements from Nvidia are changing in real time. We don't know exactly what this is going to cost per megawatt relative to a Blackwell. A lot of the infrastructure is the exact same equipment, but some of it's not. Things like the electrical systems are very different. 800 volt DC will have some efficiency improvements over a 480 volt three-phase AC system, but there's also gonna be different costs associated with it. Some are gonna be higher, some are gonna be lower.
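As a rough illustration of where an 800 V DC efficiency edge can come from, here is a simplified conductor-loss comparison. The resistance and power-factor values are hypothetical, and this ignores conversion-stage losses, which are a big part of the real trade-off:

```python
import math

# Simplified I^2*R conductor-loss comparison for delivering the same
# power over 800 V DC versus 480 V three-phase AC. Values hypothetical.

def dc_conductor_loss(p_watts: float, v_dc: float,
                      r_per_conductor: float) -> float:
    """Ohmic loss across the two conductors of a DC feed."""
    i = p_watts / v_dc
    return 2 * i**2 * r_per_conductor

def ac3_conductor_loss(p_watts: float, v_ll: float, pf: float,
                       r_per_conductor: float) -> float:
    """Ohmic loss across the three line conductors of a 3-phase AC feed."""
    i = p_watts / (math.sqrt(3) * v_ll * pf)
    return 3 * i**2 * r_per_conductor

P = 1_000_000   # a 1 MW feed
R = 0.0001      # hypothetical ohms per conductor run

loss_dc = dc_conductor_loss(P, 800.0, R)          # 312.5 W
loss_ac = ac3_conductor_loss(P, 480.0, 0.95, R)   # roughly 481 W
```

Under these assumptions the DC feed loses noticeably less in the conductors; but as noted above, some costs go up and some go down, and this is only one term in the overall comparison.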
What's really clear though is that the deltas in cost between a Blackwell and a Vera Rubin are not really project specific. I think they're commoditized. Anyone building a Vera Rubin should be experiencing the same kind of cost deltas as anyone else building a Vera Rubin relative to a Blackwell.
Yeah. For a new generation of chip to work, it has to improve productivity obviously at the chip level, but also total cost of ownership level, right? It doesn't help a customer for Nvidia to develop a chip that's so expensive to house that they don't experience any productivity improvement.
Yes.
Sorry. You can't answer? Like, is it nine to 11?
No one can answer that question.
Can't answer.
No one can. You go to Amazon, Google, Microsoft, they can't answer that question.
I mean, do you think you're gonna have to answer the question in the next few months in order to sign new agreements?
You will have to get there when you're getting through the basis of design, which is still changing right now. The reality is, like I said, these GPUs have just not been deployed. This is such cutting edge technology that what Nvidia will do is they'll come out with their reference architecture and they'll build kind of demonstration and research and development labs to accommodate the higher levels of energy density. They'll do that at like a one to a two MW scale, right? They'll run the designs through that. They'll start distributing and they'll start deploying it, but they'll get feedback from their customers as they're deploying the GPUs, and they may make modifications to the reference architecture or to the GPU itself, right? It's an evolving process.
Are you expected to go to a potential tenant with a design in hand, or they are bringing you a design? How does that dynamic work?
Actually, it's both. You're expected to have a design, but you're also expected to accommodate a design if one is brought to you. It's both.
Do you expect to have availability of workers in order to be able to construct? Or is there enough supply in the Pennsylvania area right now?
Yeah. Sorry for not repeating your questions for the online viewers. The question was around labor availability in Pennsylvania and Washington, and whether or not that's a concern. We know that's a huge issue in Texas, where there's been explosive growth for training. That is not the same issue in Pennsylvania, Washington, or Quebec. There are plenty of electricians, carpenters, and pipe fitters; labor is not in short supply in those three markets. Any other questions?
I guess this is going back to the Vera Rubin versus Blackwell, though. If you just think about the maintenance CapEx long term, right? Then you start getting return of value on the sites, right? It's a little theoretical, but 10 years out, 15 years out, are these data centers going to need to be, like, fully rebuilt to accommodate whatever the architecture is then? Or do you think maintenance CapEx ultimately comes in a lot lower, and we get to a point where power density kind of plateaus out?
You know, that's been the case for the last 15 years: you go through a cycle, and once you're done with a cycle, you need to evaluate whether or not it's economically justified to upgrade the data center. I think most times it is, because most of the time the requirements have moved on over 10 or 15 years, right? This is not a stagnant space; this is technology. The difference in this phase of the cycle is that the change is so exponential. The question is, what is the limit on that growth? I mean, Nvidia has already gone from 10 to 50 kW per rack to 150 to 375 kW with Vera Rubin, and there are rumors of going up to 1 MW in a rack. There is a limit.
There is a physical limit on how much energy density you can have. If you had asked any data center engineer from Amazon or Google or whoever five years ago about us building 350 kW energy-dense racks, they would've said, "You're crazy." There would've been no ability to even imagine an energy density like that back then. I don't know where the limit is, but there is clearly a limit drawn by physics at some point that will cap that energy density from going higher. If history is a guide, yeah, you'll probably need to make major investments at the end of the cycle to retrofit and upgrade.
You know, what's interesting about it, if you think across other capital-intensive energy subverticals, is that in many industries the, I'll call it the repowering decision, is complex.
Because you might have a 2x cost or productivity improvement versus the previous generation, and you have to make that fit inside the economics of the site. In the case of GPUs over 10 years, I would assume it's at least one order of magnitude, if not more; someone else would have to figure that out. An order-of-magnitude productivity improvement is a lot of room to work with on making a repowering decision. Please.
You talked about the scarcity of something like the 350 MW site. Can you speak a little bit on the repeatability of the business model, and where you see the company going after you get through these RFPs? Like, your sites are up and running in 2027, but for 2028 you really only have that 96 slated.
Yeah. The question was around the repeatability of the business model. Obviously we've got, I think, great assets across Pennsylvania, Quebec, and Washington, and converting those over is very accretive to shareholders with the right customer contracts. Our core competency as a team and as a business for the last eight years has been finding new power opportunities and developing those power and land opportunities into energy infrastructure. I think that's what we're best at, and I also think that's one of the higher-value problems we actually solve for tenants.
If you actually look at the value-creation cycle, at the problems we're solving for tenants, I don't think we create a tremendous amount of value for a hyperscaler by doing our own architecture and engineering, for instance. Hyperscalers have their own architecture and engineering teams. They have their own plans, they like their own plans, and they like their sites to be standardized, right? I don't think we provide a whole lot of value by designing a data center ourselves and trying to market it to somebody like that. I also don't think we provide a whole lot of value in managing the construction process versus a Google, but we do provide a lot of value in solving the timelines around it, right?
We didn't have a priority on growth. We actually deprioritized it for the last 12 to 18 months because we had just acquired the Stronghold assets. We wanted to focus on the US. We wanted to divest our Latin American assets, do the redomiciliation, and rebrand. There were a lot of things we needed to execute on as a business, and we had plenty of megawatts to develop against. We didn't want to focus on getting more power in 2029 and 2030 when we had so much to do for 2026, 2027, and 2028. That's our focus.
Now that we're at this point where, you know, we did all these things that we said that we were gonna do with the rebrand, the redomiciliation, the divestiture of LatAm, the shutdown of all those Latin American operations, and the real pivot to the United States, and getting to the point where we are with, you know, clear confidence on timelines and moving forward, we do now have a view on growth again, and we will circle back on growth. We're not going to be growing in the same way where we have in the past, where we're looking at, you know, smaller sites.
If we're going to acquire an additional site to put into our pipeline, it's going to align with the timelines of our existing development so that it's accretive and not distracting, and it's going to be, you know, Panther Creek or greater scale.
Maybe if I could squeeze one more in on I think you guys are one of the first to mention enterprise customers asking about your sites. Are you guys seeing, like, the same sort of kind of rates and terms that you would see from a hyperscaler? Just curious what you're hearing from them.
The question was about enterprise demand and economics. You know, I can't get into the economics on any one particular conversation that we're having, but what I can tell you is that the calculation from the tenant is very different in an enterprise versus a hyperscaler or a neocloud. When a hyperscaler or a neocloud is looking at, you know, the data center economics, they're looking at how much they can charge for the compute. There is a ceiling on the price of compute that people are willing to pay, right? There is no floor, right? You can go all the way down to zero. People will use free compute as much as possible, but there is a ceiling on what people will pay.
When it comes to enterprises, it's different, because they're not valuing what they can sell a token for. They're valuing what the token can enable in their organization as an outcome. What we've seen is that enterprises are applying AI to their businesses, financial institutions especially, where they have a very strong economic incentive to generate a higher return, minimize their risk, be faster, and identify some trend or data point that everyone else is missing. They're the ones who are really investing heavily in this technology, and they have a very strong economic incentive around controlling their own technology stack, software stack, and data center, because it's becoming the lifeblood of their business, and they're not going to risk letting somebody else control that.
You know, there is a ceiling on the price of a token, but I don't think there's a ceiling on the value you can create with tokens. We are at time; I appreciate everyone's questions. Thank you very much for coming. I'll be in the back for a bit if you'd like to ask me anything one-on-one. Thank you.