Dialed in online as well. We are set to go, I believe, to 3:40. So we'll reserve five to 10 minutes for Q&A at the end here. With that, we'd love to get into it, and then we can pull up some of the stuff on the website as well, as needed. I guess first off, the biggest part for Applied, at least in my view, is this upcoming HPC site.
Yep.
So, we'd love to, I guess, just start there with an update on this. You know, there's this anchor tenant that you guys have discussed before. Any kind of updated color there, and then kind of where financing stands and what you can talk through on that?
Sure. I'll talk as much as I can about it. So we announced that we had signed an LOI and given an exclusivity period to a U.S.-based hyperscaler. We announced that in April. I think I said publicly on my earnings call that we expect to get from that LOI to contract in kind of a 60-90-day time frame. So we're cruising through that. The building, which I know Rich in the back is working on putting up here, we've made really big progress on. We talked about breaking ground on this last October. Now the building's fully enclosed. It's hard to see the scale through pictures, but it's pretty impressive in person. So we're still on track with that contracting process. From a financing perspective, we're running that in parallel.
We're seeing really strong interest in the financing. So I feel really confident that those are gonna come together really well.
That's great. And could you just give those in the audience as much background as you can on this anchor tenant? Obviously, I don't think the name's been released, but.
Yeah, we haven't released the name, but, you know, we said U.S.-based hyperscaler. So in my mind, and maybe you could add some to the list, but that's basically five companies. And so I think that makes a pretty narrow group. And it is for that build; you'll see when the building comes up there. That's the first 100-MW building. And then it's also for the entire campus, which we expect to have 400 MW of critical IT load. We've talked about the economics of this in the past: around $2 million of revenue per year per megawatt and about $1 million of EBITDA. That still, I think, holds true here. But I do like having this up for the people that can see.
This gives you a shot of two things. So the new HPC data center is the large building. And then you see our Bitcoin facility in the background, just to give you an idea of what the scale is versus, you know, kind of the current operations that we're running.
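For context, here is a back-of-the-envelope sketch of those quoted per-megawatt economics, applied to the 100-MW first building and the 400-MW campus; the arithmetic is purely illustrative and uses only the figures mentioned above.

```python
# Back-of-the-envelope sketch of the quoted campus economics.
# Figures per the discussion: ~$2M revenue and ~$1M EBITDA per MW per year.

REV_PER_MW = 2.0e6      # ~$2M annual revenue per megawatt (as quoted)
EBITDA_PER_MW = 1.0e6   # ~$1M annual EBITDA per megawatt (as quoted)

for mw in (100, 400):   # first building vs. full-campus critical IT load
    revenue = mw * REV_PER_MW
    ebitda = mw * EBITDA_PER_MW
    print(f"{mw} MW: ~${revenue / 1e6:.0f}M revenue/yr, ~${ebitda / 1e6:.0f}M EBITDA/yr")

# 100 MW: ~$200M revenue/yr, ~$100M EBITDA/yr
# 400 MW: ~$800M revenue/yr, ~$400M EBITDA/yr
```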
What percent complete is the HPC site?
So right now, that's a little bit out of date, but it's almost fully enclosed at this point. From the way we did the construction here, you can still kind of see some of the pillars. These have concrete pillars as the base that are driven down about 60 ft or 70 ft into the ground, and then you do tilt-up concrete. The concrete walls were being manufactured offsite for the past four months and then get trucked in. I think at one point there were like 75 semi-trucks per day rolling those in and tilting them up. And so now we'll go into fit-out. So you'll be doing electrical, climate, all of the things that need to go into the facility to operate.
Got it. And so I guess how much CAPEX still needs to go into this?
A significant number. So we have a little over $100 million, I think around $130 million, into the building right now. But for this build, and we've talked about this publicly, it's kind of like $8 million a megawatt without gen. You add another $1 million or $2 million a megawatt if you're doing backup gen. Right now it's being built without gen. It's three levels, and the bottom floor is all mechanical and networking. And there'll be a massive room of lithium-ion battery backup. So there's a lot of redundancy being built into this facility.
Got it.
But let me get to where I think you were trying to go, which is, you know, we still have a lot of the CAPEX ahead. My expectation is we'll get something in the 80% loan-to-cost range, so we've put the vast majority of the equity into this building already. In terms of what we're seeing for capital cost expectations on the debt, it should be maybe like SOFR plus 250, somewhere in that neighborhood.
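As a rough sketch of how those pieces could fit together, assuming the ~$8 million/MW figure, the 100-MW building, the ~80% loan-to-cost target, and debt around SOFR plus 250 bps; the SOFR level below is an assumed placeholder, not a figure from the discussion.

```python
# Hedged project-financing sketch from the figures quoted above.
# The SOFR level is an assumption for illustration; the rest is as quoted.

COST_PER_MW = 8.0e6    # ~$8M per megawatt, built without backup gen
BUILDING_MW = 100      # the first building's critical IT load
LOAN_TO_COST = 0.80    # ~80% loan-to-cost expectation
ASSUMED_SOFR = 0.05    # placeholder assumption, not from the discussion
SPREAD = 0.025         # "SOFR plus 250" basis points

total_cost = COST_PER_MW * BUILDING_MW            # ~$800M all-in
debt = total_cost * LOAN_TO_COST                  # ~$640M of debt
equity = total_cost - debt                        # ~$160M of equity
annual_interest = debt * (ASSUMED_SOFR + SPREAD)  # indicative carry

print(f"total ~${total_cost / 1e6:.0f}M, debt ~${debt / 1e6:.0f}M, "
      f"equity ~${equity / 1e6:.0f}M, interest ~${annual_interest / 1e6:.0f}M/yr")
```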
Mm-hmm.
But it's scheduled to turn on the first capacity in December and then be fully on in Q2. Right now, the schedule is 50 MW, then 25 MW, then 25 MW. And then if we continue with the full campus, it's gonna be 50 MW per quarter until that's done.
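A minimal sketch of that ramp, treating each scheduled tranche as one step; the mapping of tranches to specific calendar quarters isn't specified here, so the list below just tracks cumulative capacity per step.

```python
# Cumulative capacity under the stated ramp: 50, 25, 25 MW for the first
# building, then 50 MW per step until the 400-MW campus target is reached.

ramp = [50, 25, 25] + [50] * 6   # MW added per scheduled step
CAMPUS_TARGET = 400              # full-campus critical IT load in MW

total, cumulative = 0, []
for step in ramp:
    total = min(total + step, CAMPUS_TARGET)  # cap at the campus target
    cumulative.append(total)

print(cumulative)  # [50, 75, 100, 150, 200, 250, 300, 350, 400]
```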
Got it. Okay. And for the full campus, you have the power capacity under your current setup?
Yes.
Okay. Great. And I believe we had penciled in like $6 million-$7 million per megawatt before. What's behind the $8 million number now?
So it's the requirements, especially as you move up. Think of it this way. There's the headline number that I give, and then there are NRCs, right, non-recurring charges that the customer pays, which bring that number back down. But I'm giving you the headline number. The cost goes up on this depending primarily on what you have to do on the electrical side from a redundancy perspective. So if you're doing N+1 or N+2, or you're doing backup gen, that, like I mentioned, can be fairly expensive. And the climate control here, this is a liquid-cooled facility, chillers, you know. We've learned a lot, and we're still confident in hitting the economics we're looking for, which is really the key here for us.
Got it. Understood. That's great. And the other site, the Bitcoin mining site, so Ellendale, walk us through kind of any updates on the outage there.
Sure.
There were some transformer issues in the past, but.
Yep. So we had a transformer issue in January. We thought we had it fixed and turned back on, I think, in February, and then we had some more issues. What we ended up having to do is replace all the transformers onsite. And I know for anyone who is my shareholder, and for me too, it felt like it took a long time. But it was, I think, 54 replacement transformers for the step-downs. You can see the small squares next to the Bitcoin mining buildings; those are the transformers we're talking about. So we were able to procure those, get them delivered, get all the equipment. And now we've swapped all of them.
Now, the issue we did have, and this is more detail than you probably need, is what we were able to get was 2.5-MW transformers, and we're replacing 3.5-MW transformers. So we have enough transformers there. As of today, I think we're at 125 MW running there. And then the remainder will happen over the next few weeks as we pair some of those 2.5s to make 5s and get the site fully back online.
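A tiny sketch of that substitution arithmetic, since two 2.5-MW units paired together more than cover one 3.5-MW slot; only the ratings and the 54-unit count come from the discussion, the rest is illustrative.

```python
# The substitution described above: 2.5-MW replacement transformers standing
# in for 3.5-MW originals, with some paired up to make 5-MW sets.

ORIGINAL_MW = 3.5        # rating of the failed step-down transformers
REPLACEMENT_MW = 2.5     # rating of the replacement units procured
REPLACEMENTS = 54        # replacement transformers brought onsite

paired = 2 * REPLACEMENT_MW   # two 2.5s "make 5s": 5.0 MW per paired set
print(f"{REPLACEMENTS} replacement units procured in total")
print(f"single: {REPLACEMENT_MW} MW vs. a {ORIGINAL_MW}-MW original slot")
print(f"paired: {paired} MW, which more than covers a {ORIGINAL_MW}-MW slot")
```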
Got it.
But we swapped. We had some foreign-made transformers that we swapped for some GE transformers that we found.
Are there still ongoing plans to recover some of that, I guess, from the original supplier?
Yes. Yeah. We won't discuss that too much, but there's definitely ongoing plans.
Yeah. Understood. Okay. You said 125 MW now.
Is running now. Yep.
Okay. Got it. That's great. If we could talk a little bit about Sai Computing, or I guess there's a new name now.
Yep.
First off, on that 8-K that came out, is there anything that changes that segment beyond just basically a name change?
No. So let me tell the funny story about the name change and then the rest of it. I always make a joke, and it's funny 'cause it's true: the name change came about because if I sat with three different people in a room, they would all say it differently. I had Sai Computing, side computing, and then some would confuse it with the singer, Sia. So we made that change, but it's not just for that. It was because people in the market know us as Applied Digital. Even when we're out in the GPU business, they know us as Applied Digital. And so we made the change to really reflect how we've been going to market with that. You'll notice the other piece there.
So that was the name. We actually appointed board members as well. I've talked pretty openly about how, at some point in the future, these are a little bit different businesses, and they could be two companies.
Yep. Yep. Okay. That's great. So for that piece of the business, can you walk us through what you can share on the cadence of GPU deliveries and remind the audience what those expectations are for the remainder of the calendar year?
The calendar year. So just to level set on that business: we had deployed, and I won't update the exact number now, but I think as of our last call, we had six to eight, something like that. We continue to deploy, and I think our team's getting more efficient at that. The biggest change in that business is, we've been really successful with a lot of the AI startups, and I've been talking very publicly about how it's exciting 'cause we're seeing enterprise come into the market. The real balance going forward, the demand is still there, is not how quickly we can deploy.
It's making sure we have the right customers, right? We need to make sure that we have the right mix of enterprise-type customers versus startup customers. I love the customers that we have now. Some of them are doing just fantastic. But there are two risks you're managing in that business. One is the pricing curve over time and kind of technology obsolescence. And the other risk is your customer credit risk. And so we need to manage that. The cadence is really gonna be driven by moving up into enterprise and other larger companies. That's really gonna drive it going forward.
So, talk to us a little bit about that shift. The enterprise customer versus VC-backed. How have those conversations gone?
We're having a lot of activity. I will say this: the one thing that is very different is it takes longer for these to close. There's more of a diligence process. There can be a POC, whereas with more of the startups, it was just like, how quickly can I get GPU capacity online? We're balancing that, but we're pretty far in the process there. I've mentioned before, I think we've had one customer in a POC, and we've had two more that we've done. And think of the category of companies as really the more tech-oriented companies that are adding in something on the AI side.
But I think you're gonna see more and more of that as we go to market, because there are very specific requirements. For financial, think banking, right? It just has different requirements. Think healthcare, with the HIPAA requirements. So we're kind of targeting and siloing these products the way they should be. But this is really gonna be driven by that enterprise part of the market.
Does anything change from an infrastructure standpoint to service those customers?
Some, no; some, yes. And it's not really the infra part. It's more on the software layer side, as to what the companies need, especially as you start moving down. I don't know the exact number, but if you're 35 or younger, you've probably never operated in anything but a cloud-native environment. And so what the product needs to do is operate with the company's current infrastructure. So if they're using AWS, Azure, or GCP, it needs to have all the hooks in, and it just needs to be an API that works right in the same infrastructure they already have. So there'll be some of those, but then some customers are really sophisticated, like the startups that just want a bare metal service.
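As a purely hypothetical illustration of those "hooks": a cloud-native customer wants GPU capacity behind the same kind of API client they already use, rather than a bare metal handoff. The endpoint, fields, and base URL below are invented for this sketch and are not a real product API.

```python
# Hypothetical sketch only: what an API-first GPU provisioning call might
# look like to a cloud-native customer. Endpoint, fields, and URL are
# invented for illustration; this is not Applied Digital's actual API.

import json
import urllib.request

def provision_cluster(api_base: str, token: str, gpus: int, interconnect: str) -> dict:
    """POST a cluster request to a (hypothetical) REST provisioning endpoint."""
    body = json.dumps({"gpu_count": gpus, "interconnect": interconnect}).encode()
    req = urllib.request.Request(
        f"{api_base}/v1/clusters",
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage (against a hypothetical endpoint):
# provision_cluster("https://api.example-gpu-cloud.test", "TOKEN", 1024, "infiniband")
```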
Mm-hmm.
There are always requirements on the hardware itself, too. Depending on the customer, when we're rolling out specifically for them, they might have a little bit different storage requirement, or there are certain things where they might need some specialty servers they use to manage their offering. But for the most part, it's pretty similar.
Yep. In your view, do you think the venture backdrop is still healthy for these VC-backed startups? I guess what kind of led that shift?
So, look, there are places where we see it being extremely healthy. With one of our customers, it feels like they're asking for more every week or every other week. And then there's the thing that, which was not a customer of ours, but there was a pretty high-profile AI startup that just kind of went away overnight recently, right around GTC. So that, along with watching some of the others, means we just need to make sure that we're controlling the risk on both sides. And then you'll have a new dynamic on this. I mentioned there's really two big risks there.
One is the customer, right, going out of business or not needing the capacity any longer. Two is the technological risk. You probably noticed we were included in NVIDIA's press release about Blackwell; we'll be one of, I think, 10 companies included in that that'll have early access. So now we're starting to think about what we're gonna be deploying six months from now and eight months from now, and then kind of reading through to the customer subset of what we're doing there. So there's both of those pieces that we're managing.
Got it. That's great. That actually leads me to my next question. Could you just highlight the partner status you have with NVIDIA, and how that's allowed you to capture maybe low-hanging fruit or advance further in the market?
Sure. So a couple of things there. We got access to H100s really early, I think, relative to almost anyone. We had a 1,024 cluster running in early July of last year. That has propelled us into kind of our own relationship there. I think they treat us great, and I think they run a really high-quality organization. The evidence of us being included in that Blackwell announcement kind of shows the relationship that we have there. The other thing that was helpful with them is our DC team last year spent a lot of time with them designing this building, because NVIDIA does have a DC design and procurement team.
You know, I think it's gonna be a great structure when it's done.
That's great. And are there still plans to migrate some of your own GPUs into the site?
So it gets a little trickier. We think that's probably gonna be a little bit further down the road. We still have third-party colocation that we're doing for the ADC, the Applied Digital Cloud. Right now, what's under LOI is basically this entire site, so we wouldn't put any of ours in this site.
Does the current LOI leave room, if there's demand, to do maybe 200 more megawatts? I know the full site would be 400 MW.
Yeah. So the full site's 400 MW, and the LOI is for the full site. The demand environment on the data center side is extraordinarily high right now. Beyond this site, we've talked about having, I think, close to 2 GW in our power portfolio. And so we're on to marketing the next pieces, but we're seeing a huge amount of demand on the data center side. It's pretty insane, actually.
Understood. And when you think on the power side, 'cause you're kind of still in both businesses, where there still is a Bitcoin mining piece, do you see the demand for AI kind of pushing out Bitcoin mining over time? Like, you look at those guys, they basically need power costs below $0.04/kWh.
Yep. Yep.
To be profitable. Do you see AI pushing out the need for Bitcoin mining data centers? Or maybe those sell off to companies like Applied, where way down the road you can get access to more cheap power?
Yeah. So on the Bitcoin sites, and I think we know this really well, some of those sites will not, I don't think, work for HPC or for AI. And the reason is, with Bitcoin, and it's another joke I like to tell, you can almost do, like, an AOL dial-up connection. You know this. The connectivity just doesn't matter that much. And you'll see them using some kind of wireless backhaul or even Starlink, and all of that works. Whereas with AI, even though you can play around a lot more with the latency, you still need really good fiber bandwidth going to the site. And not just one route. You need redundant routes, preferably two to three.
So when we look for these, well, we weren't looking for it when we found Ellendale, right? We were looking for Bitcoin mining. We were lucky with Ellendale in that it has, I think, four or five providers that can give us redundant routes there. So a lot of the Bitcoin sites will, I think, just be permanently Bitcoin. And this goes back to, like, our Texas site that we sold, right? That can never be an HPC site for me. So that was a lot less strategic than what I have in North Dakota. Now, for Bitcoin sites that can move over, I think it maybe becomes harder out in the future as all of this power is kind of sucked up.
I mean, I've seen some crazy stats around this. I think it was one of the hyperscalers at either a Morgan Stanley or JP Morgan conference recently that said, just with the three largest hyperscalers, they're gonna need 100 GW between now and, like, 2030. I mean, what's the North American market? Like 23 GW?
Right.
Like it's an insane number. And the only way to find that is, like, we're good at this. We go find this stranded power. But not all of the stranded power will work for this.
Mm-hmm. I guess, what could you convert over at Ellendale to do HPC anyway? You'd still have to build a ground-up site.
Yep. You gotta build a ground-up site. You're not gonna convert the buildings. But for us, we have a customer there at that site. And when that customer goes, or doesn't renew, or whatever it is, years and years out, the option is there to build another building, connect it in, and reuse the power. So the existing Bitcoin capacity that we have is kind of this longer pipeline of power.
Got it. And that contract is what, 4.5 more years, or?
Yeah. I think four to 4.5 more years.
Okay. I mean, is that a business? It seems like, you know, a decent business, the Bitcoin mining part.
Oh, it's a fantastic business, right? The returns on capital spent are just absolutely great, when your transformers are working. But for us in the market, it's the durability and how we get valued in the public markets, because people discount the duration. And then we also don't get the upside. Like, if we were a vertically integrated miner, then people buy miners because of the Bitcoin on the balance sheet.
A big believer in Bitcoin, and I'm not saying I am, but whoever would buy that stock, is saying, "I think Bitcoin's going to $1 million, and I'll have all this Bitcoin on the balance sheet." So we're kind of in this weird middle place, where you question the duration, and you don't have the Bitcoin upside exposure. So I think the right choice for us is the longer-duration assets, because the data center assets we're building are kind of the same type of asset, but the duration just gives you a much better value for it.
Yep, I think that's a great way to put it. While we're on the topic of valuation, CoreWeave obviously just raised at, I believe, a $19 billion valuation. Just wanted to get your sense, as you think about the landscape, of the opportunity before Applied here.
Oh, I didn't know that. First time it's come up for me, the CoreWeave valuation. Look, those guys have gotten escape velocity. They have a lot of scale. I think we're behind. But if you look in that group of, call it, new CSPs, I would say we're top three, top four from a size perspective. I think we get looks at opportunities that not everyone else gets, because we at least have that validation. But we need to go up-market and increase scale for ourselves. And that's a private market valuation. But those guys, from what I understand, have done a great job of deploying a lot of GPUs.
But you still see some of the other privates getting extremely high valuations, especially relative to us. I don't know what else there is to say about that. I think we just need more scale.
Yep. Yep. So I believe the first time you guys announced entering AI HPC publicly, in a big way, was a year ago at our May conference.
It was. Yeah.
So a year on, what have you learned about the business? And where do you see it moving forward based on what you've learned? There have been some changes.
Sure.
Over the last year. We'd love to get to that.
So, we're talking specifically about the cloud business. It was literally, I think, the day or two days before we were here at the conference that we signed that deal with what we've now announced as Character.AI. And it was really exciting for us, getting into that market. The way we got into it, we were already running a small cloud service at our North Dakota facility. But we were out marketing data center space, ran into that opportunity, signed it, and then we saw everything that was going on. We signed more customers.
I went back to the real estate development team, where we were kind of working on 5-MW, 10-MW, and maybe 15-MW builds for HPC, and I'm like, "It's time to go back to hundreds of megawatts like we did on Bitcoin mining." So that part was exciting. And now, one year later, we've learned a lot about deployment and operation. Right out of the gate, we announced that, signed it, and turned on a 1,024 cluster in July, right? That was really fast. So we're feeling really good about that deployment. Now, if you go back, when we turned that on, we added InfiniBand later. So it started on Ethernet.
Mm-hmm.
And then the InfiniBand came later. And InfiniBand is probably technically one of the hardest parts of putting these up. The amount of cabling that goes on, even for a 1,000-GPU cluster, it's just like tens of miles of cable that you put inside the facility, plus tuning and optimization. So we have learned a lot. We've added a lot of great people to the team. But that's probably been the biggest pain point for us. There were times when there were supply constraints and those kinds of things. But from us receiving it to having it deployed and running, I think we're getting much more efficient. That was probably the biggest part of the learning curve.
Then it was starting to see the players in the market and how it evolves, like the thought process of going from just how many GPUs can I get online and how quickly, to, "Okay, what is this market segment gonna need from a feature standpoint, or another market segment?" So we've kind of evolved to that point now. And I think we talked about how we've hired sales for the first time to go after the enterprise market. That's all coming together very nicely. But it was definitely a big learning curve over the past year.
Yep. That's great. That's helpful. You had mentioned on the last earnings call that part of that deployment issue was just getting more folks hired and helping out with that. Ellendale, I believe, is a pretty remote location. Has there been any kind of labor issue with getting folks up there?
No. At the Ellendale site, we have a great contractor that helped us with the Bitcoin facilities, and we have another one out of Minneapolis for that site. They've been able to pull labor. That's a pretty resilient, or deeper, market than you would expect from a labor perspective. The hiring we're talking about, on the GPU deployment side, none of that has really been going on in North Dakota. It's been in Denver and Salt Lake City and in these third-party colos. So it's just more a matter of, and I believe you'll see this free up over time, but we're a year into it, and it's all pretty new.
Like, when we were out hiring people, just finding people that had experience with large amounts of GPU compute was hard. We were looking at national labs. We were looking at universities. Those were the ones that were effectively managing large amounts of this type of supercomputing. We've been able to pull a lot more in. But I think the industry at large has been grappling with this a little bit, and it's getting better.
Got it. I have a couple more nitty-gritty questions, and then we can turn it over to the audience for some Q&A. First, could you walk us through the unit economics on the GPU side? We already got the HPC side.
Mm-hmm. So on GPU, this is H100; we've talked about this publicly. Typically, depending on pricing, on an annual basis per 1,000 GPUs, you're kind of in the $18 million-$20 million of revenue. And these run at a 75%-80% EBITDA margin. Then when you drop down, it depends on how you do the depreciation. You'll have noticed we're depreciating over two years currently. So that turns into a terrible op margin that, in year three, will turn into a really fantastic op margin. But I think the right way to think about these is kind of like a five- to six-year life, and then get the margin from that perspective. So you'd take, call it, between $40 million and $45 million for an H100 cluster, fully InfiniBand, HGX.
And then you can do the math on the depreciation of five or six years versus two.
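A minimal sketch of that math, using midpoints of the quoted ranges; the midpoints themselves, and straight-line depreciation, are assumptions for illustration only.

```python
# Unit-economics sketch for a ~1,000-GPU H100 cluster from the quoted ranges:
# ~$18M-$20M annual revenue, 75%-80% EBITDA margin, ~$40M-$45M of capex,
# booked over 2 years vs. a ~5-6-year useful life. Midpoints assumed below.

annual_revenue = 19.0e6   # midpoint of the $18M-$20M range
ebitda_margin = 0.775     # midpoint of 75%-80%
capex = 42.5e6            # midpoint of $40M-$45M (fully InfiniBand, HGX)

ebitda = annual_revenue * ebitda_margin   # ~$14.7M per year

for years in (2, 5.5):    # book depreciation vs. useful-life view
    depreciation = capex / years          # straight-line assumption
    op_income = ebitda - depreciation
    print(f"{years}-yr depreciation: op income ~${op_income / 1e6:+.1f}M "
          f"({op_income / annual_revenue:+.0%} op margin)")

# 2-yr:   ~-$6.5M (deeply negative op margin during the fast write-down)
# 5.5-yr: ~+$7.0M (~+37% op margin over the asset's useful life)
```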
Yep. Okay. Got it. And then on the SG&A ramp, with the new folks in Denver, some of the colo sites, and the Ellendale sites, at some point do we get some operational leverage here? And I guess, unless you add another 300 MW at that site, most of the hiring is done by the end of calendar 2024?
For that site?
Yeah, for the 100 MW HPC site.
So when you think about it from an SG&A perspective, the labor that goes onto the site will go into COGS, right, not SG&A. And we're using some, we won't go into exactly who, but there's some stuff we're outsourcing from an operational perspective. We just wanna make sure we don't fall down on some of these things that we shouldn't. But that site, like I said, will turn on in December and then finish turning on around Q2 of next year. You should think about the ops side as going into COGS. But we have built out; we've added a lot of team members. You're gonna see the GPU revenue really start to ramp up for us.
As you know, last quarter we were paying for, I don't know, four of these clusters plus data center space, and we were recognizing revenue for only one of them. So you just have this massive drag. Well, that'll start to reverse as those go into RevRec, which they have. That'll be the biggest part: you'll see we have a lot of expense tied to a business that's generating very little revenue, but the revenue rec against that expense starts happening over the next couple of quarters.
Got it. Understood. And we're still on the same timeline on the GPUs for the remainder of this year?
Yeah, there's been no change there.
Okay. The last point I wanna discuss, and then we can take some Q&A, is the new hires you've made. I believe his name's Todd Gale, correct?
Yep.
Yep. So is there kind of a strategy more on, I guess, liquid cooling infrastructure? 'Cause I know that was some of his background. And is there anything on the power side as well, as you guys are still sourcing new sites and costs?
Yeah. So Todd was a fantastic hire for us. He's been involved in building out hundreds of megawatts for hyperscalers. Just a fantastic background. I was really happy that he took the offer and joined, and it's kind of a perfect time for the company. Todd comes from Flexential, which is a big data center company, I think actually based here in New York. So that was a really good hire for us, with really good timing. And he has a lot of experience with all parts of putting this together. But, you know, this facility is a liquid-cooled facility. Now, I get a lot of liquid cooling questions.
In a facility like this, or in anything that we will build, we don't decide what the liquid cooling solution is, right? We're providing the plumbing, the electrical. People ask me a lot about which company I see leading in liquid cooling. We don't make that choice, if that makes sense.
So, the end customer's making that choice?
Yes.
Okay. Interesting. Do they all have different choices that you have to accommodate?
So the basic plumbing will accommodate a lot, right? But there will be some differences as you go into final fit-out, depending on what the choice is.
Got it. Understood. Okay. With that, we've got about seven minutes or so. Happy to take some questions from the audience.
I'll just follow up on what you commented on there with liquid cooling. For suppliers within the plumbing, do you have any that you're looking at? Any you see the customers choosing predominantly?
We will see that. We haven't yet.
And that's otherwise pretty standardized equipment that you're able to supply.
Mm-hmm.
What are the power or efficiency gains with liquid cooling here?
So I was talking to a company that is providing a liquid cooling solution, and their thought was you can get down maybe to sub-1.1 on a PUE. We'll see. The good news in North Dakota: our air cooling there is running, like, 1.17, which, for people who don't geek out on data centers, is a really low PUE. So that, with the power, works out really well there. The name I hear come up quite a bit is the Supermicro solution, but we're kind of close to them, so I can't tell you for sure who's really getting a lot of traction there.
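For anyone not fluent in PUE: it's the ratio of total facility power to IT equipment power, so a lower number means less cooling and electrical overhead. A minimal sketch of what the quoted figures imply, assuming the 100-MW building as the IT load purely for illustration.

```python
# PUE (power usage effectiveness) = total facility power / IT equipment power.
# Sketch of the overhead implied by the quoted figures; the 100-MW IT load
# is just the building size discussed earlier, used here for illustration.

IT_LOAD_MW = 100.0   # assumed critical IT load for the example

for label, pue in [("air-cooled, ~1.17 as quoted", 1.17),
                   ("liquid-cooled target, sub-1.1", 1.10)]:
    total_mw = IT_LOAD_MW * pue          # total facility draw
    overhead_mw = total_mw - IT_LOAD_MW  # cooling + electrical losses
    print(f"{label}: {total_mw:.0f} MW total, {overhead_mw:.0f} MW overhead")

# air-cooled, ~1.17 as quoted: 117 MW total, 17 MW overhead
# liquid-cooled target, sub-1.1: 110 MW total, 10 MW overhead
```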
The colo site that's air-cooled?
Our Jamestown site is air-cooled.
Yeah.
It's running like I said. It's been running, like, 1.1 now, but I don't even wanna quote that PUE; it needs to be averaged for the year, because the wintertime PUE is really low, as you can imagine.
Yep. And then the third-party ones, those aren't using liquid cooling?
No.
Okay. Were there other questions?
I personally think you'll see a lot of liquid cooling in 2025. I mean, Blackwell, I think, is gonna almost all be liquid cooling.
Mm-hmm. Do we have any more here?
Just zooming out, how much would you guys, going forward, consider kind of building on spec? How much of this?
So I think everything we have going forward will be contracted beforehand, with a timeline. We took a risk here that I think is gonna pay off really, really well for us. We started this on spec, and now we're in contracting for it. But we needed to do something to break in here, so we took that risk. My expectation going forward is things will be contracted before we start building.
Is that just 'cause you're seeing so much demand out there that you can be selective?
Yeah, there's a huge amount of demand. And now, hopefully, we expect to get this over the finish line. Then we'll have had a project with a hyperscaler and be building it, and so we'll have the reputation to continue and market.
Great. Well, thank you, Wes, CEO of Applied Digital. Thank you for being here.
Thanks for having us.