Applied Digital Corporation (APLD)

Investor Day 2023

Oct 12, 2023

Erin Kraxberger
VP of Government Relations & Public Affairs, Applied Digital Corporation

Welcome to Applied's first-ever Investor Day. We're so thrilled you're all here. Applied Digital is not just a name or a corporation, it's a vision. It's a commitment to redefining how first movers, innovators, and digital leaders scale into high-performance compute. It's about collaboration, innovation, and reshaping the digital infrastructure landscape. Today's Investor Day is a testament to the immense progress that's been made, and it's a tribute to your belief in our mission. It's a celebration of the brilliant minds behind our success, our dedicated team, our visionary leadership, and of course, you, our valued investors. My name is Erin Kraxberger. I spearhead the customer and investor relations efforts here at Applied, and I'm thrilled to be your host today. Over the next few hours, you're going to hear about the transformative projects, partnerships, and technologies that are propelling Applied Digital to new heights.

We have some powerful updates, from our next-generation data centers to our commitment to sustainable energy solutions. First, we're going to kick it all off with Applied Digital's founder and CEO, Wes Cummins. He's excited to share an update on the state of the market, so please help me welcome him to the stage.

Wes Cummins
Chairman and CEO, Applied Digital Corporation

I think that is on. I'm going to sit, if that's okay. Thanks, everybody, for joining us here today. I'm going to kick off by giving a brief history of the company for those of you who don't know it, kind of where we've been and how we got here, and then what it looks like for us going forward. Today, you're going to hear in much more detail from some of our other execs around our AI cloud business, and specifically our HPC data center business, and a little bit on our blockchain data centers as well. The history of Applied: I founded the firm in early 2021. It's had a couple of, I would call it, iterations since then.

So in early 2021, we were going to do industrial-scale Ethereum mining, and we formed a partnership with a company called SparkPool, which at the time was the largest Ethereum miner in the world. They controlled about 25% of the Ethereum hash rate in their mining pool. So we partnered with SparkPool. I think that partnership happened in March. We raised money in April. And this was all expected to be deployed in China, by the way. By the end of May, the Chinese government had cracked down on crypto. At the time, you had about 70% of crypto mining going on in China and, I think, less than 5% in the U.S.

And so we saw an opportunity in a big migration of Bitcoin mining, specifically, from China to the U.S. We'd already made these relationships. We had these partners. They came to us to see if we could go build out capacity. We had already found a guy who finds power in the U.S. He'd been doing it for about five years; I still think he's one of the best in the business. We found a gentleman who had developed and brought online one of the first, I think the first, mining facilities in the U.S. It was for a company called Mineco at the time, which you may know as Core Scientific. So we were able to hire those two, plus others, to go and develop these. We signed our first contract for power in July of 2021.

We ended up breaking ground for our first facility in September of 2021. And then between September of 2021 and now, so roughly 24 months, we've, you know, done all of the procurement and development to build and bring online almost 500 MW of Bitcoin data centers, effectively. By any measure, that's a massive amount of power to go and build out and do all that procurement for. And we did it really quickly, and we built a great team around that. So we have that business that's moving forward nicely. We expect to energize our Texas facility by the 23rd of this month, as we've said. So we're close.

And then, in around April or May of last year, of 2022, we started thinking about what else we could do with these power assets. And we decided to design an HPC facility, a high-performance compute facility, a completely new, ultra-efficient design that we could co-locate with our Bitcoin facilities. But at the time, you have to remember, this was a niche market. And so the idea was, we'll do 5 or 10 megawatts of this style of capacity at each one of our Bitcoin sites, and this will be a nice diversification and kind of market expansion for the company. And then, at about this time, October of last year, we put some software partnerships in place and pieces of software to run a cloud service. We did that specifically to...

For one purpose. We were going to run a cloud service out of our own data centers because we thought we'd have to be our own first customer and show people that these data centers work. So it was, again, around selling the data center capacity. We started with our first customers in December of last year. We were running mostly, you know, universities doing small workloads, what you would have called at the time machine learning or deep learning. So we started that in December of last year. And then as this year progressed, you know, ChatGPT hit the market in December. It wasn't completely obvious to us at that point what the ramifications of that were. Then NVIDIA introduced the H100 in March, and it became really apparent to us in April what was going on.

So kind of mid to late April, we were out marketing our data center capacity, and we ran into these cloud customers, people who wanted a cloud service. We ended up signing that first customer in May, I think it was May 15th, and then we leaned into the cloud business from there. So we've signed up additional customers. Now, when I look at our business, the other piece we had to do was, I went back to my team and said, "You know, we're not gonna do 5 and 10 MW on the sites. Guys, we've got to go back to building hundreds of MW of capacity." And luckily, we'd signed up a significant amount of capacity to go and do that in North Dakota.

We'd also been working in Utah to sign capacity up, and so we had that ready to go. And that brings us to where we are now. At this point, we've said publicly that we've signed up $378 million of annual contract value for AI customers. Brad will go through the groundbreaking of our Ellendale site for our new data center later. I don't want to steal that from him, but he'll be able to talk about that. But I wanted to set the groundwork for what we're doing and then turn it over to our CFO, and then to some of the people who are running these specific segments, to give you a much better idea of what we're doing.

Then I'll come back and talk about the market and what we see for demand afterwards. Let me give that intro, and then I'll turn it over to David.

David Rench
CFO, Applied Digital Corporation

Good morning, everyone. Thanks for joining. As we mentioned, I'm David Rench, CFO for Applied Digital. APLD is an amazing company, and it's really the team that is the magic behind how we've accomplished so much in such a short time. I'd really like to thank everyone on our team who's made this possible. Legal disclaimers. So APLD, when you think about it, we have three very distinct business units within the company. The first is our newest line of business, which is accelerated compute, or supercomputing as a service. We've ordered 34,000+ GPUs from NVIDIA through our partners, Dell, HP, and Supermicro. Our customers are some of the most cutting-edge and leading players in the space.

We're very excited to be growing this business and excited about where that's gonna lead. The second vertical is our next-generation data centers, and our CTO will dive a lot deeper into what this entails later this morning, but they really solve the issue of where accelerated compute will be hosted. NVIDIA has really changed the game of what compute looks like today and tomorrow, and the vast majority of existing data centers cannot house this type of compute, so we're answering that question of where it can go and what that design looks like. Finally, there's our existing business model of colocation hosting for Bitcoin miners.

Although we're not expanding this business any further, it's a great cash flow business for us and runs very smoothly at this point. Marathon Digital is one of our largest customers there, and it continues to be a great business for us. Again, it's amazing what the people here have been able to do in a short period of time. We started with two and a half people, and we're over 170 employees today, and we've really built out a team that brings the resources we need. We continue to attract the best talent and develop a great team to keep executing. Over the last few quarters, we've continued to accelerate our growth, and that's mainly because the contracted revenue continues to come online.

You can see through the history where we had, at Jamestown, that little hiccup where the transformer and the power company went out for a month or so, but we were able to come back and continue the ramp. This shows Ellendale in the final quarter beginning to recognize some accelerated compute revenue. With the energization of Garden City this month, as well as us standing up additional accelerated compute clusters, that trend should continue in the direction we've seen. So we're very excited about that. Now, again, as a reminder, the vast majority of our revenue is contracted, take-or-pay, and reserved long term. So we'll dive into, again, the three segments.

The blockchain colocation data centers: we have 480 megawatts of capacity, and 280 megawatts are online today. We've announced that October 23rd is when Garden City will start energizing. When fully ramped, the segment EBITDA should be about $100 million, and we'll use that cash flow to continue funding developments for the company. The Sai Computing vertical, again, has 34,000 NVIDIA H100 GPUs on order. We have over 30 megawatts of capacity ready to deploy those, so you'll see a fast ramp there between now and the end of our fiscal year. One cluster, and we talk in clusters, is 1,024 GPUs and uses about 1.5 megawatts of power.

We have a target of an estimated two-year payback for each cluster, and a six-year useful life. One cluster produces about $18 million of anticipated revenue on a reserved contract. At on-demand market rates it would be much higher than that, but we've chosen to go out and sell reserved capacity. HPC hosting: we're building out 300 MW of projects, and we'll continue to energize that. We have our first generation at Jamestown, 9 MW, and we've got projects in Utah and North Dakota where we're beginning to push dirt, and we're excited about where we're going with that. The question always comes up of how we're going to finance that.

We're targeting 70%-80% construction financing, with an equity partner to come in for the remaining balance, and about $6 million per megawatt of CapEx there. I really just wanted to break this down into very simple building blocks so that you can work on your models and understand some very specific, easy metrics here. For Bitcoin, per megawatt, we expect $625,000 in revenue and $208,000 in segment EBITDA. For HPC, revenue of $2 million per megawatt and segment EBITDA of $900,000-$1 million.

Then AI Cloud, again, is per cluster: 1,024 GPUs, 1.5 MW, and $18 million of annual revenue. We don't want to talk about EBITDA there because the depreciation is such a large amount, but on a segment operating margin basis, it's about 40% when you get to scale. That really runs through the basic financials for you. We'll have a Q&A session where you can ask questions later.
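To put those building blocks in one place, here is a minimal back-of-the-envelope sketch in Python using only the per-unit figures quoted above. The cluster count at the bottom is a hypothetical input for illustration, not guidance, and nothing here models ramp timing, power prices, or contract terms.

```python
# Back-of-the-envelope model built from the per-unit figures in the remarks above.
# All dollar amounts are annual; actual results depend on ramp timing, power
# prices, and contract terms that are not modeled here.

def blockchain_segment(mw):
    """Blockchain colocation: ~$625K revenue and ~$208K segment EBITDA per MW."""
    return {"revenue": 625_000 * mw, "ebitda": 208_000 * mw}

def hpc_hosting_segment(mw, ebitda_per_mw=950_000):
    """HPC hosting: ~$2M revenue per MW; $0.9M-$1.0M segment EBITDA per MW."""
    return {"revenue": 2_000_000 * mw, "ebitda": ebitda_per_mw * mw}

def hpc_build_cost(mw, debt_share=0.75):
    """HPC build: ~$6M CapEx per MW, with 70%-80% targeted as construction debt."""
    capex = 6_000_000 * mw
    return {"capex": capex, "debt": debt_share * capex, "equity": (1 - debt_share) * capex}

def ai_cloud_segment(clusters):
    """AI cloud: each 1,024-GPU cluster draws ~1.5 MW and books ~$18M per year,
    with roughly a 40% segment operating margin at scale."""
    revenue = 18_000_000 * clusters
    return {"revenue": revenue, "operating_income": 0.40 * revenue, "power_mw": 1.5 * clusters}

if __name__ == "__main__":
    print(blockchain_segment(480))       # full 480 MW ramp -> ~$100M segment EBITDA
    print(hpc_hosting_segment(300))      # the 300 MW of HPC projects in development
    print(hpc_build_cost(300))           # ~$1.8B of CapEx at $6M per MW
    print(ai_cloud_segment(clusters=6))  # hypothetical cluster count, for illustration
```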

Erin Kraxberger
VP of Government Relations & Public Affairs, Applied Digital Corporation

Thank you, David. As we were putting this day together, it was important to us that you obviously hear from our management team, but we also thought it would be great if you could hear from some of our partners and customers, too. So throughout our time together today, we're gonna share a number of videos with information and perspectives from those people. We're going to start first with Jarrett Appleby. Jarrett has been digging into our company over the last two months. He runs a digital infrastructure advisory practice and has many insights to share. Although Jarrett couldn't be with us in person today, he did spend some time with me last week walking through his market perspective, and we'll play that for you now.

Jarrett, thank you so much for joining us today. We're really excited to get your feedback on some important aspects of what's going on in this industry. First, would you mind describing your background and experience for us?

Jarrett Appleby
Senior Advisor, Blackstone Group

Thanks, Erin, for inviting me. Sorry I couldn't be there live. I'm a senior advisor, and I now run an advisory business for the digital infrastructure space. I started out in the network world, you know, 30 years ago and got into data centers about 15 years ago. Most recently, I was Chief Operating Officer for Digital Realty. I left there about five years ago, after 15 years in the industry. Now I'm a senior advisor to the Blackstone Group, and I work closely with emerging companies like Applied Digital. So, excited to be here today.

Erin Kraxberger
VP of Government Relations & Public Affairs, Applied Digital Corporation

Awesome. If you wouldn't mind sharing, how did you first learn about Applied?

Jarrett Appleby
Senior Advisor, Blackstone Group

Well, I've been working with the NVIDIA ecosystem for the last four or five years with some of my clients, and I was really excited to see the emergence of digital infrastructure players who had GPUs and were bringing new products and solutions, like a bare metal offering, to the market. I actually supported Blackstone on their diligence on CoreWeave, and I looked around and said, "He's got a pretty interesting model." So I reached out to Wes, the CEO, and really had a discussion with him about what your strategy was and what you might be doing.

Erin Kraxberger
VP of Government Relations & Public Affairs, Applied Digital Corporation

Perfect. And as an obvious expert in the space, you alluded to this a little bit, but maybe I could get you to expand a little more: what was it about Applied specifically that made you want to connect?

Jarrett Appleby
Senior Advisor, Blackstone Group

Well, I think there are a couple of key trends in our ecosystem right now. One is the impact of AI and machine learning on data center campus designs and the new product, a whole new structural change in how buildings are being built to support AI machine learning workloads. I thought that was super interesting, critically, starting with the North Dakota campus and what's going on up there. Second, these emerging services, particularly bare metal and GPUs, and getting control of that pipeline. It's a very scarce offering and capability, and I think Applied has some great capabilities and services they can offer. And then finally, the level of investment and the partnerships are very intriguing.

I've been working with the hyperscalers for 20 years, and they have a heavy reliance on these types of services, and it's super interesting to see the partnerships, including the NVIDIA Elite partnership, that you have in place.

Erin Kraxberger
VP of Government Relations & Public Affairs, Applied Digital Corporation

Well, maybe let's take a step back a little bit and first talk about the overall data center marketplace. What is high-performance computing from an equipment and technical requirement standpoint?

Jarrett Appleby
Senior Advisor, Blackstone Group

Well, I think we've seen an evolution of workloads from enterprise solutions, which really had a high dependency on networking, where power densities were, you know, in the 3-4 kW per cabinet range. We saw the evolution of cloud availability zones, which are large deployments, typically 18-36 MW, where power densities tripled or quadrupled to 8-10 kW per cabinet. But in today's world, it's about cooling and delivering high-power-density solutions that could easily be 40-50 kW per cabinet, or in some cases even 120 or more, as we've seen. So the HPC AI world is the next generation. This is one of the biggest structural shifts that I've seen in my 30 years in the industry.

I think the folks who are on the leading edge of creating products, buildings, and cooling solutions that can support this generation, like Applied Digital, are gonna be winners in the marketplace.
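As a bit of reader arithmetic on the densities quoted above (not something from the interview itself), the footprint implications look roughly like this:

```python
# Cabinets needed to house 1 MW of IT load at each generation's typical density,
# using only the kW-per-cabinet ranges quoted above (illustrative upper values).
densities_kw_per_cab = {
    "enterprise": 4,                 # 3-4 kW per cabinet
    "cloud availability zone": 10,   # 8-10 kW per cabinet
    "HPC / AI": 50,                  # 40-50 kW per cabinet
    "extreme AI": 120,               # 120+ kW per cabinet
}
for era, kw in densities_kw_per_cab.items():
    print(f"{era}: ~{1000 / kw:.0f} cabinets per MW of IT load")
# enterprise ~250, cloud ~100, HPC/AI ~20, extreme AI ~8 cabinets per MW
```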

Erin Kraxberger
VP of Government Relations & Public Affairs, Applied Digital Corporation

Who is it that needs GPUs, and what are most of the high-end GPUs being used for today?

Jarrett Appleby
Senior Advisor, Blackstone Group

Well, the ones we read about in the press all the time are the hyperscalers, who've quickly pivoted in the last year. You've even seen some announcements where they had to stop their data center development programs for some time to retool the design, architecture, and supply chains to support it. So clearly, the hyperscalers all need these types of solutions and will partner up. But enterprises need it as well. This is a disruptor, and it can really support everything from financial services to healthcare to pharma. Pretty much every industry will have some type of AI machine learning dependency.

I think in the first generation we're seeing a lot of the training workloads, which can be further away from city centers, but the real value, I think, is dual purpose: where you're closer in, you can support businesses nearby, and enterprises, managed service providers, and channel partners.

Erin Kraxberger
VP of Government Relations & Public Affairs, Applied Digital Corporation

Got it. And what are some of the ways the data center requirements of these hyperscalers we're speaking of differ from enterprise data centers?

Jarrett Appleby
Senior Advisor, Blackstone Group

I think one is scale. I mean, the enormous scale of hyperscalers; you can actually see campuses from a hyperscaler. I was at a conference recently, and they said, "I don't get out of bed for under 100 megawatts anymore." That's not what an enterprise would look for; they want a room or a cabinet type of solution. But now you're seeing whole buildings, and you're even seeing campuses that are 500 megawatts or even a gigawatt in today's market. The real estate's important, fiber's important from a site selection standpoint, but power costs, total cost of ownership, and the ability to cool are really the differentiators in today's market.

Erin Kraxberger
VP of Government Relations & Public Affairs, Applied Digital Corporation

What would you say is the pace of change in the equipment and, like, where are all these technical requirements going?

Jarrett Appleby
Senior Advisor, Blackstone Group

Well, I think we're in the early days. We're in the early innings of the AI hype curve, and you see a lot going on, but at least the clients we're working with around the world don't have their final solution yet. That's why this transformation is so important. But they're experimenting; they're executing and delivering solutions to see what's gonna happen. And it'll take years to optimize the supply chain, the buildings that are built, the products and solutions, and how to maximize it. So it's an exciting opportunity in the industry, and I didn't see this scale a year ago. I was just at another conference and saw a report; I think there's something like 7 gigawatts of data center development going on right now to be delivered by 2026.

We've really never seen that type of scale before. It's just a very structural change.

Erin Kraxberger
VP of Government Relations & Public Affairs, Applied Digital Corporation

Applied is designing and building one of the world's largest GPU clusters, keeping latency at the forefront. Can you provide some feedback on this new design?

Jarrett Appleby
Senior Advisor, Blackstone Group

Yeah. What's intriguing to me is the importance of latency and the network, the neural network, in the middle of all this. We saw campuses start out in the availability zones. They were single story, spread out pretty significantly in very large buildings to go to market quickly, and then people started stacking that design. In today's world, it's potentially good to be vertical, so it's interesting to see Applied's solution, which is vertically stacked to reduce latency and improve information transfer. The other piece of it is that it has to be always on: you're running these GPU chips continuously and trying to be efficient until you do maintenance. And that's a really different, all-out type of deployment, which is very heavily dependent on the network in the system...

The performance of the GPU chips and the cooling technology all have to play together. I'm finding the purpose-built Applied design super interesting in this first campus, and I think that's one of the new things we're gonna see. Early movers kind of used the existing data center and colocation space, but you're gonna need purpose-built AI machine learning data centers going forward. We don't know all the answers, but it's good to be in front of it and testing these in the market.

Erin Kraxberger
VP of Government Relations & Public Affairs, Applied Digital Corporation

As we've been talking about, the industry is moving and changing really fast, and that's obviously creating many hurdles to meeting this ever-growing demand. The first hurdle that everyone's generally aware of is obviously GPU supply. A second bottleneck is likely a much longer-term hurdle, and that's power availability. Can you provide some commentary on where Applied fits into this?

Jarrett Appleby
Senior Advisor, Blackstone Group

I think, number one, we all know the importance of latency and the distance away from where the internet is. The internet lives in key peering hubs around the world, and when the cloud started out, there were distance limitations on how far away you could be. For certain AI workloads, we can really test that. That's where North Dakota comes into play: because it has cheap power, because it's a cooler environment, and because you can build at very, very large scale, we're gonna see people go there. You know, I worked on a project a year and a half ago in the middle of Pennsylvania, and it was because it was near nuclear power. When you get power costs that could be, you know, a 4-cent type rate, it gets people's attention, and at scale.

And so we do, as a country and globally, have limitations on the power side, and AI machine learning requires much more. So I think that's really gonna be a shift in thinking about how far away from city centers you can go. You can definitely use these sites for the training workloads; it's just a matter of dual purpose, of how far away they can be.
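As a quick sanity check on why a rate like that gets attention (my arithmetic, not Jarrett's), a 4-cent rate for a megawatt of continuous load works out roughly as follows:

```python
# What a "4 cent" power rate means for one megawatt of continuous load.
# Illustrative only; the rate above is quoted as a rough example, not a contract price.
rate_per_kwh = 0.04          # $/kWh
hours_per_year = 24 * 365    # 8,760 hours
kwh_per_mw_year = 1_000 * hours_per_year
print(f"${rate_per_kwh * kwh_per_mw_year:,.0f} per MW-year")   # ~$350,000
```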

Erin Kraxberger
VP of Government Relations & Public Affairs, Applied Digital Corporation

How do you see these major cloud companies and AI players hunting power and managing site selection? And how do you think Applied is situated to compete for these contracts?

Jarrett Appleby
Senior Advisor, Blackstone Group

I think the hyperscalers are great at this. But the scale that's needed and the pipeline of new capacity needed caught all of us, I think, a little off guard in terms of solving for it, at least in the near term. Per some recent industry reports, they were hopeful that two out of every three campuses would be self-builds. But frankly, I think what we're seeing is they're only able to do one out of three. So that means roughly two-thirds of the market, at least at this point, is available to partners who can provide power and build the right products and solutions for them, to new providers or emerging players who can deliver for them.

Erin Kraxberger
VP of Government Relations & Public Affairs, Applied Digital Corporation

What would you say are some of the biggest risk factors facing HPC infrastructure providers? And with that in mind, how would you say that Applied is set up to tackle some of these issues or risks?

Jarrett Appleby
Senior Advisor, Blackstone Group

I think the cooling solutions are probably the biggest technical risk we're seeing. It's undetermined what will come out on top. I think using a combination of air cooling and liquid cooling is the way to go. With AI, if you're using water cooling, that's a real issue in some parts of the country. So water utilization efficiency is really big. The Applied team is really thinking through those things: how to minimize water, and how to take advantage of environments where they can provide a combination of air and liquid cooling. And with liquid cooling, over time, I think closed-loop systems are coming. So the Applied team is looking at all those ideas and figuring out the best solution with their customers.

Erin Kraxberger
VP of Government Relations & Public Affairs, Applied Digital Corporation

So maybe one last question. I would love to know if you have any comments on where we are in this hype cycle, and how trends and demand are expected to evolve over the next, call it, two to three years.

Jarrett Appleby
Senior Advisor, Blackstone Group

I mean, I think we have line of sight. If you talk to hyperscale clients and enterprise clients, we have pretty good line of sight, at least into 2026 or 2027, in this cycle. It's undetermined after that. The evolution of these types of offerings is gonna be super interesting, and we don't know how long the GPU limitations are gonna last either. I think in this window, though, of the two to three years you mentioned, it's about taking advantage of and fully utilizing the GPUs that are available. That requires different models and different product solutions to test the market. I think Applied's well-positioned there, among a few others, especially with the NVIDIA ecosystem. I think they're particularly strong in this phase.

The next phase that we wanna focus on, you know, with the executive team and the leadership team here, is building that next generation of partners. It's gonna be about partnering and a flexible model to support their growth and then, in turn, Applied's growth.

Erin Kraxberger
VP of Government Relations & Public Affairs, Applied Digital Corporation

Well, thank you so much, Jarrett. It was incredibly insightful. Thank you for taking your time today. We really appreciate it.

Jarrett Appleby
Senior Advisor, Blackstone Group

Thank you, Erin.

Erin Kraxberger
VP of Government Relations & Public Affairs, Applied Digital Corporation

When trying to do something new, you must move fast. At Applied, it seems we don't really have a speed limit; we have a speed to market that we adhere to, and if you can't already tell, it's moving at warp speed. So how are we doing that? We're so glad you asked. Here to catch us up on the Applied development philosophy is Brad Barton. Brad has an impressive construction and design background, and we're glad to have him as our EVP of Real Estate Development. It's all yours, Brad.

Brad Barton
EVP of Real Estate Development, Applied Digital Corporation

Morning. Excited to be here, and I'm gonna take things at a little bit of a different speed. I'm a design and construction person in the real estate department. My favorite part of real estate is the physical asset, so we're gonna talk a little bit about that, but we're gonna stay a little high-level and theoretical, if we can. That's the clicker. Thank you. So, probably the best titles I do hold are husband and father of three. I think they're watching today, so I'm excited to hear what they think. But I have an extensive background in the design and construction of critical facilities. I may not look very old, but my entire career has been spent building data centers, from multi-billion-dollar government data centers that I can't talk about to privately held data centers in the Texas region.

I want to focus on one of the most iconic buildings here in the great city of New York, the Empire State Building. Most of you know it, most of you have been to it, and maybe some of you work in it. The crazy thing about this building is that it was designed in weeks and built in 12 months. I don't know about you, but that's absolutely insane. A building like this today would take a year and a half to two years to design and then five years to build. What's happened? Why are we slower today? Why does it take us so much time to build? Well, there are a lot of reasons why this project was a success, and I'm gonna focus on a few of them in case you don't know the history.

It was right before the Great Depression. There was an abundance of available labor, and skilled workmen at that. It was a repeatable design; it's copy-paste on every single floor. They did use prefabrication. There were fewer building codes. Safety, unfortunately, wasn't even a factor; there were quite a few fatalities on the job. And material was readily available. Today, I wanna talk about the lessons that I'm personally learning from this build and how we can apply them at Applied Digital. So, where we're at today is a segmented market. We've pulled apart design, and we've pulled apart construction, and what that's created is a massive gap. We're all specialized. There used to be one design company.

I think I've hired like 15 for my latest build, and what's in between, in that gap, are labor shortages; new tech and designs that haven't even been invented yet, which we're designing for; and thousands and thousands of building codes. Thankfully, OSHA and safety are involved, but it doesn't make things go faster. And our good friends at Toyota taught us the beauty of just-in-time inventory. That's a wonderful thing, that was awesome, but it also makes construction so hard. If you want a generator, you're waiting almost two years for a generator right now. That's a long time. So what this gap has done is make design look kind of like this chart over here. It's pretty squiggly and pretty gnarly and ugly in the beginning. And honestly, this is kind of what it looked like on our first purpose-built data center.

We're talking to all the OEMs, we're talking to manufacturers, we're talking to vendors, Supermicro, NVIDIA, a lot of tenants and clients out there. What do you want? What are you seeing? What does the market look like? You just heard from Jarrett that this is a brand-new, almost industrial revolution. We don't know what we don't know yet. So it did kind of look like this. But then what the industry does is want you to deliver a lot of these standard packages, schematic design, design development, and hopefully that coincides with your construction schedule. Most of the time, it doesn't. There's a standard set of specs and everything that's issued. It doesn't really line up and help you go quicker. If any of you are architectural engineers, I do not mean to offend.

So what are we doing at Applied Digital to bridge that gap? Can we change the industry? No, we can't. But there are small, little things that we can do that help us pivot and go a little bit faster without sacrificing quality, safety, schedule, or cost. I'm gonna talk about a few of those. So we design to the actual delivery method. Instead of calling it an SD package or a DD package, though sometimes those labels make it onto the drawings, we call them packages. This right here is a sample of how we're scheduling an underground, a foundation that we want to pour. Well, if you want to pour your foundation, you don't need just a schematic design. You need a schematic design that includes the following definitions and programs. Does it cause you to make some guesses? It does.

Does it cause you to pull some things forward? It absolutely does. You want to pour a slab, you've got to know your underground. You've got to know what chillers are going there. You've got to know all the things up front. So how do we do that? This is an actual slide of some chillers out there. The one in red is a very reputable, large chiller manufacturer, probably cooling this building, and the one in green, at 27 weeks, is probably cooling the building next door. It's just as reputable. But we happened to find a size and a quality manufacturer that could meet our lead time, so we designed to that piece of equipment. At Applied Digital, we're too young and too nimble to be married to one specific vendor. Do we need the quality and the spec to be met? Absolutely.

But these three can meet it. It just changes our spec a little bit. So can we design around that? Absolutely. So how do we learn about this? As much as we want to claim that we're the all-seeing eye and we're leading the market, we are in some regards, and we're not in others. When we go work in North Dakota, when we go work in other municipalities, we don't know what we don't know. So the early involvement of contractors and trade partners, trying to get them involved even earlier than this, is key. They let us know what the labor market's like. They let us know, "Don't buy those lugs, because they won't come in for 52 weeks.

That gear is available, but the connections aren't." That's great insight for us to get at the very beginning of design, not in a hard-bid scenario where we go out and say, "Tell me what it costs and make it cheaper." We bring them in up front. Another approach goes back to what we talked about with the Empire State Building. This is the traditional design process for steel. The SEOR is the structural engineer of record. They spend all this time delivering these standard packages, typically working under an architect. Then we hire a general contractor, and then they hire a fabricator. The fabricator is a third-party designer that takes the drawings and then draws its own fabrication details of how the steel can go to a mill to be processed and built into the steel building we're standing in today.

That's a lot of people going back and forth, and all those arrows represent changes, design iterations. Someone wants a bathroom on floor five. Someone wants their office a certain way, and it's got to carry certain weights. So things change, and there's a lot of room for errors. This takes a lot of time. Now, this next graph I'm going to show you promises 10 weeks of savings. That's pretty aggressive. I haven't seen it yet, but we have seen some weeks of savings, where one of our contractors... We hire the structural engineer under the architect to draw. He then has a separate contract underneath the fabricator to design. So he's taking his singular model all the way through design, construction, fabrication, and almost erection. That's a really easy step to eliminate.

Can other people do this? Absolutely. Will they? No. People like the way we've traditionally done it because it's comfortable. Yeah, it costs more, and, yeah, it takes longer, but sometimes it's more fun to complain about things than to actually fix them. We're willing to go and challenge this. Will we get 10 weeks? Maybe. Can we get more? Could we get less? Yeah, but any savings and efficiency in construction time solved in design is the best way to go. Another thing the Empire State Building used was prefabrication. They didn't have the high-tech tools we're using today. They used vellum and blueprint paper, and people drew by hand. We have BIM capabilities, Building Information Modeling. We're able to draw things, fabricate them in a shop off-site, and bring them to a rural North Dakota site.

Our aim for this next build is to move 30%-50% of the labor off-site so that, one, it can be safer, cleaner, and better quality, and then put it into place later. If we can accomplish that, our goals are very attainable. We're already setting up partnerships with some fabrication shops to help us do things like our underground pier caps. Our pier caps are quite big on a building this size. And the multi-trade racks that we're looking at building, it only makes sense. Have your trades work in a heated warehouse during the winter, ship the racks on a truck, and put them into place. It's kind of plug-and-play. There's a little more work to it, but that's about it. So I'm going to pivot here and go into data center construction. That's all kind of been theory.

Data centers have been around for a very long time, and they're very complex. If you were to look at a set of plans, they'd look like a big, wide, open space with some rooms surrounding it, and it doesn't look very complicated, but it is. For those of you who've been in the data center space, you know that cooling and electrical are paramount. Really, to put it simply, a data center is lots of power in and a lot of heat out. How do you guarantee that if I'm going to host your machines? Well, there's something that was invented called a service level agreement, an SLA, and they've been improving over the past 20 or 30 years in what they guarantee to people.

One of the biggest things, and we were talking about it at dinner last night, is those four or five nines of uptime. I cannot go down. People cannot miss their Instagram feed. We have to stay up. Well, at Applied Digital, we're challenging that. The AI world is challenging that. Do you really need to be up that long? The answer still remains to be seen, but we're hedging our bet on no. This new AI work cluster doesn't require full uptime, just like it doesn't have the same latency requirements, which is something we're actually able to take advantage of up in North Dakota. We'll talk about that later. So that brings us to the history of data centers, and I'm going to go really high level through here.

Some of you are my seniors, so please correct me if I'm wrong. Data centers have been around since the 1970s and 1980s; they were supercomputers. But really, we're going to start in the early 1990s. This is when the IT closets were within the four walls of a building, and a great company called Digital Realty, along with GI Partners, figured out, "Hey, we could probably grab those IT closets, put them in a colocated space, and charge people money for it."

... They were small per-kW leases, really easy, a really quick start, and they did great. They went public in the mid-2000s at $12 a share. Right now, some of you may watch the market better than me, but they're a big company, one of the biggest REITs in the country. So had we bought shares back when they went public, we'd be doing just fine. And I often wonder, why didn't I do that? Well, it was because I was, like, 10. I was scoring touchdowns, or maybe not. I was just too young. Right now, we have the opportunity, and I'm obviously very bullish on our company, to invest in a new industrial revolution.

The IT closets are now in these old legacy data centers, and Applied Digital is building a new asset, a new jump. I think we went public at a similar price, and I think this is a great opportunity to jump in. Pivoting next to what we're doing: we've talked about this Ellendale data center. This is our next build. We are breaking ground very soon. I was just up in North Dakota on Monday with our general contractor, and we're mobilizing within the next few weeks. It is winter, and we're making provisions to build during the winter. This is a purpose-built AI brain in a building, just like Jarrett was talking about. We're going to have 100 megawatts of building load, all delivered within one single building.

We are targeting a rack density from a minimum of 45 kW all the way up to 150 kW. Your cooling mediums do change at those densities, so we're designing for one or two cooling mediums. It is purpose-built for AI workloads, and we're targeting a PUE that's lower than 1.2, but right now we're willing to say it's going to be at least as good as 1.2. Harvesting that cold air up in North Dakota is a wonderful thing for us. We want to emphasize, though, that we're not going to be a one-trick pony. My real estate, design and construction, procurement, and site selection teams have been very busy, and we have a pipeline busy enough to keep us going through that season Jarrett mentioned, probably all the way out into 2030.
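Some rough arithmetic on those Ellendale targets, as a sketch rather than company guidance; whether the 100 MW figure refers to total facility load or IT load isn't specified in the remarks, and this sketch treats it as total:

```python
# Rough arithmetic on the Ellendale targets quoted above:
# 100 MW of building load, 45-150 kW per rack, PUE of roughly 1.2.
# PUE = total facility power / IT power, so lower is better.
BUILDING_LOAD_MW = 100
PUE = 1.2

it_load_mw = BUILDING_LOAD_MW / PUE                 # ~83 MW usable for compute
racks_at_45_kw = it_load_mw * 1_000 / 45            # ~1,850 racks
racks_at_150_kw = it_load_mw * 1_000 / 150          # ~560 racks

print(f"IT load: ~{it_load_mw:.0f} MW")
print(f"Racks at 45 kW:  ~{racks_at_45_kw:.0f}")
print(f"Racks at 150 kW: ~{racks_at_150_kw:.0f}")
```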

There's so much power we've been able to locate. We've been very fortunate to have, like Wes mentioned, the right individuals on the team to find that power for us. We want to develop that building, and future iterations of it, on campuses now and for years to come. We'd love for you to join us for the ride, and thank you for your time today.

Erin Kraxberger
VP of Government Relations & Public Affairs, Applied Digital Corporation

Thanks so much, Brad. So you've already heard some high-level overviews of the different business units that make up Applied. Each unit, while a valuable part of the expansive ecosystem here at Applied, has its own story and opportunities. That's why we want to let each of our unit leaders take a few minutes to walk through their unit and share some expertise on what they've been seeing. We're going to begin with the area that got it all started, our hosting unit. But before we do, we want to give you a little customer perspective.

Jim Crawford
COO, Marathon Digital Holdings

My name is Jim Crawford. I'm the Chief Operating Officer at Marathon. We are a large, publicly traded miner. We are currently operating facilities throughout the United States, the UAE, and other locations, about 600 megawatts of facilities, again, spread out geographically. You know, we are really focused on our technology stack. We're developing our own pool, our own firmware, our own hardware, and doing some strategic investments in other areas that are ancillary to mining. We're very focused on the innovative side of mining and continuing to grow our operations. I think we're the only large miner who runs their own pool. I believe we're one of the few who are developing their own firmware. There are some other initiatives internally on the hardware side, some cooling technologies.

But, you know, what that allows us to do at our scale is eke out small productivity gains here and there through our own technologies. That just gives us a strategic advantage. Doing the small things well shows a level of maturity and professionalism in an operation, and Applied does those things very well. That was evident in our early discussions before we even turned a miner on, and it continues today. We're a large miner, and what that means is we buy a lot of mining machines, ASICs. And, you know, our model, when we first started engaging with Applied, was to work with third-party partners, extend our expertise on the mining side, and let them do the hosting side.

What we found with Applied is a partner who had the capacity and kind of the forward-thinking approach to secure the right types of facilities to operate these ASIC units in, and we've had a very collaborative relationship. We really appreciate Applied's operational professionalism. They've done a great job sourcing and training local talent, with very solid operating teams. We've worked with numerous other partners, and we're very pleased with the SOPs they outline at their facilities. They've also done a great job engaging the local community. In a lot of areas, Bitcoin mining is an unknown, so engaging the community early and getting their support is instrumental in a successful launch of a facility. They've done a great job at that. It's been great.

When I look at Applied and ask, on the operations side, what is the value that Applied brings to our organization, it's the consistency of the operation. It's the stability of the hash rate. Aside from the obvious benefits of that, what it allows us to do is allocate our internal resources to other initiatives, other facilities, other locations. Historically, we've had to take more of a hands-on approach at these facilities to make sure they're performing at a level that meets our standards. When you're not spending that extra bandwidth on the day-to-day hands-on operations, it allows you to focus on other initiatives.

You know, optimizing your operations, technology, innovation, implementation, eking out small gains, which at our scale can be very material. So really, the value I see in Applied is that it's a best-in-class site operator. We signed our first agreement with Applied a little over a year ago and energized our first miners early this year. What has surprised me about the evolving relationship we have with Applied is their flexibility and their willingness to work on special projects and experiments outside of just the core mining that we originally engaged them for. Other facilities, other operators, they're a little more rigid in their adherence to SOP. So having a forward-thinking partner who's flexible is something we certainly appreciate.

We have some pretty interesting projects going on with Applied, and most of those, in fact all of those, were not contemplated in the original agreement. Mining is ever-evolving, and having a partner who's willing to experiment and push the boundaries with the ground operations allows us to innovate and push boundaries as a company. Predicting the future is always challenging in this industry, but I think Applied has been a great partner. As I said, they're very consistent, and their operations are run very professionally. There are not a lot of third-party hosts in this business right now, and if we had to choose one to continue to work with, and continue to trust our most valuable assets with, it certainly would be Applied.

Erin Kraxberger
VP of Government Relations & Public Affairs, Applied Digital Corporation

There are so many incredible things happening in this space, but I'm going to let you hear it straight from the source. Nick Phillips, our EVP of Hosting Operations and Public Affairs.

Nick Phillips
EVP of Hosting Operations and Public Affairs, Applied Digital Corporation

I'm Nick Phillips. I'm the EVP of Hosting Operations and Public Affairs at Applied Digital. I get to work on the strategic long-term and midterm operations, and I've got a team of 110 people who support me on a day-to-day basis at all of our facilities. The public affairs part of my job is working with local, state, county, and federal government officials, whether it's legislators, regulators, or other folks, to help carry the message about what Applied Digital is doing, educating them on topics, and working through any issues that come up in regard to permitting or other things that might hold up our business. Take what we've built: we've constructed almost 500 megawatts of facilities in the last two years, two years and change.

We have turned on 300 megawatts of those facilities in that time. We've grown from, again, when I was the fifth person to join the company, to now almost 170 people. In that period of time, we've built all sorts of great systems, all sorts of great operations, all sorts of ways that we watch things. Our customers are really interested in what we're doing, but they're not that interested in micromanaging their equipment on a day-to-day basis, because we do it for them.

I had a customer at some point call and say, "Hey, can you check all of the settings on all of our miners for us?" I said, "You don't need us to, because I have an alert that pops up every 15 minutes if anything is not exactly the way it's supposed to be." I'm 100% confident that things are set up the way they're supposed to be and that we're managing them properly, because we have systems we've built and put into place to do so. North Dakota, where we have two of our facilities representing about 300 megawatts, as well as West Texas, where we have a 200-megawatt facility, are really great locations for wind-generated power.
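As a minimal sketch of what that kind of recurring check might look like: the miner inventory, the settings API, and the alert hook below are hypothetical placeholders, since the actual tooling isn't described in the remarks.

```python
# Minimal sketch of a recurring configuration check like the one described above.
# fetch_miner_settings() and send_alert() are hypothetical placeholders; the real
# inventory, settings API, and alerting system are not described in the remarks.
import time

EXPECTED = {
    "pool_url": "stratum+tcp://pool.example.com:3333",  # made-up example value
    "power_mode": "normal",
}

def fetch_miner_settings(miner_id: str) -> dict:
    """Placeholder for whatever agent or API reports a miner's live settings."""
    raise NotImplementedError

def send_alert(message: str) -> None:
    """Placeholder for the real notification channel (email, pager, dashboard)."""
    print("ALERT:", message)

def check_fleet(miner_ids: list[str]) -> None:
    drifted = []
    for miner_id in miner_ids:
        settings = fetch_miner_settings(miner_id)
        if any(settings.get(key) != value for key, value in EXPECTED.items()):
            drifted.append(miner_id)
    if drifted:
        send_alert(f"settings drift on {len(drifted)} miners: {drifted}")

if __name__ == "__main__":
    while True:
        check_fleet(miner_ids=[])   # fill in with the real fleet inventory
        time.sleep(15 * 60)         # every 15 minutes, as described above
```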

The challenge in these areas is that there's not a lot of load in those locations, so there are not a lot of people looking to take that power and utilize it. The second challenge with these remote areas we're located in is that there aren't enough transmission lines to pull that power away from all that generation. By building our facilities right where the power is being generated by all these wind turbines, we really help out the grid in a lot of ways. When there's too much wind, which means there's too much wind-generated power, we're able to absorb a lot of it. When there isn't, we're still able to pull from these transmission lines to power our facilities.

It does a lot for the local communities, and it does a lot for us. From a financial standpoint, if the wind turbines were to be curtailed, meaning they were unable to operate during a time when there's too much wind-generated power on the grid, they have to be shut down, and effectively they're just not generating any revenue for those companies, or for the communities in terms of taxes or other payments tied to them. Because we have our load in those areas, we're able to take advantage of very low pricing at times when there's a lot of wind being generated, and we're able to help the communities with revenue by keeping those turbines spinning. Others don't have the sophistication of the systems that we've put into place.

We have very clean facilities. We spend a lot of time and energy on safety, and we spend a lot of time and energy on optimizing energy usage. We have all sorts of systems for monitoring and managing all of the miners to make sure they're mining to the right place all the time, which is a security risk we manage very, very tightly, and that they're up and operational as much as possible. We have very unique and creative ways of managing hash rate so that the grid operators and utilities we work with can function very well within their systems and get what they need, which helps balance out the grid, which in turn gets us the power pricing we need to be, you know, extra profitable. In terms of managing the mining business...

We do it with a really small headcount, right? We try to hire folks who live in these small rural towns that we operate in. We bring in folks who have mechanical, farming, electrical, or other types of experience, and we've set up systems, processes, trainings, and other resources for them to be able to manage these highly technical, highly sophisticated facilities in a very effective manner. We've managed to do that in very small towns ranging from 500 to 17,000 people, which is where we operate.

We're able to find a local workforce that's right there, work with them, and run these facilities very efficiently without needing highly trained workers with years of experience to come in and do things for us. While we're doing that, we're managing our SLAs. We're making sure we're meeting our customers' needs. We're making sure that, from a security standpoint, our customers' equipment is doing what it's supposed to be doing all of the time. There's never a case where I'm worried about that. About two years ago, when we started doing this, it was just an idea, right? We went from, "Hey, we're gonna go build a 100-megawatt facility in rural North Dakota," to now, today, where we've got some of the world's best, most well-run facilities.

We've got almost 500 megawatts of facilities constructed. 300 megawatts of those facilities are online today, with the rest coming online soon. We have systems in place that manage and monitor everything. We take care of our customers' equipment very tightly. We're able to see what's going on all the time and respond very quickly to any issues that come up. When you walk into our facilities, they're clean, they're well organized, they're properly cabled. We've taken what's been kind of the Wild West of blockchain and really tried to professionalize it and make it run like a professional data center does. That means cable management. That means monitoring systems. That means having a line of sight into every little thing that's going on and being able to manage it very tightly.

Erin Kraxberger
VP of Government Relations & Public Affairs, Applied Digital Corporation

Time for a quick break. So we've got more snacks and more coffee right outside those doors, but please be back in about 15 minutes. Thank you.

Welcome back, everyone. So you've heard a lot about the opportunity and work that's building inside our HPC business unit. Here to give an update on what's happening there is Erik Grundstrom. He's our Vice President of HPC Infrastructure. At the end of his update, you'll also hear from a customer who's already seen a ton of benefit from their work with Applied. Now we'll let Erik speak.

Erik Grundstrom
VP of HPC Infrastructure, Applied Digital Corporation

Hi, my name is Erik Grundstrom, VP of HPC at Applied Digital. I initially met the Applied Digital team while working at an OEM, a compute manufacturer, and getting to know them over time was a great experience. Meeting at shows, discussing architecture, and deploying the initial POC, which is now leased in Jamestown, was all fun. Applied garnered a great amount of support from the OEM I was working for, both from myself, from an engineering perspective, and from executive leadership as well. So it didn't take long. We started to build the first supercomputer together as soon as the AI explosion began and engaged heavily with the end customer. And from there I made the jump. I had to.

It's a once-in-a-lifetime experience. Applied is doing things that nobody else has ever done. It's very exciting, and I'm very happy to be here. So over the last two months, I've continued to work with an amazing team, leading this HPC and supercomputing effort, continuing to build and bring on new talent, with everybody mission-focused and excited for what the future brings at Applied. The first thing I want to talk about is something we're all very keenly aware of, which is market growth and demand.

With demand currently exceeding supply, the expanding applicability of AI, and the enterprise converging automation, analytics, and code, Applied is in a really unique position to harness and expand on our existing resources to meet the computing needs of today, whether that means VC-backed, AI-focused companies or the enterprise. So we're expanding on both fronts. Applied is not just a colo provider; we are not just an AI-as-a-service provider; we're not just a platform-as-a-service provider. We combine the construction, facilities, architecture, design, and maintenance involved in all of the physical infrastructure, and we also deploy world-class, bleeding-edge computing infrastructure.

For one company to manage both worlds together means that a customer can not only realize a degree of value that doesn't exist elsewhere in the market, but also get a great degree of flexibility: the ability to evolve over time, to change, to accommodate different workloads and different hardware as it comes in, whether that means we need to scale from 45 kW to 100 kW per rack, which we're planning on doing, or go beyond that. We'll have facilities that can expand into any cooling scheme that's expected moving forward. Hardware is consuming more and more power as time goes on. That's apparent.

That's one thing where, you know, we're on a linear curve, and all of the manufacturers will tell you the same thing. It is a major factor in the performance gains of the future, and Applied is designing new facilities and acquiring new power leases from state and local governments. That's going to allow us to do more than anybody else can do. There's no disconnect, right? We all work together as one organism. They're in our meetings, we're in their meetings. We're continually discussing optimization and the best way to deliver on this idea, and that really is a unique position.

We've had great success with customers signing on to what we do, customers who see the value in our operating both the physical environment the servers run in and, hands-on, the compute infrastructure itself. I think the highest compliment you can be paid is a referral from one customer to another, and we've thankfully had increasing demand as a result. As for what these end customers are doing, anybody familiar with what's happening in AI can tell you it's very exciting: it expands on human creativity, expands our ability to articulate certain messages, and expands our capabilities as people, especially in the digital world where most of us spend so much of our time.

So, yeah, the demand, the onboarding of awesome new customers, and the continued expansion are all very exciting, and they're remarkable achievements for a company of this age. The second thing I want to talk about is technology differentiation. Applied's approach as a service provider to the industry is unique. We utilize power, real estate, construction, and compute capabilities to provide world-class architecture, both in terms of facilities and HPC infrastructure.

Our flexible supercomputer-as-a-service and bare-metal-as-a-service offerings are tailored to accommodate the world's most demanding workloads and are designed to offer precision performance, yet can be easily configured and redeployed through a combination of AI-powered, automated, and hands-on processes. Another area to highlight at Applied is our talent and expertise. Applied is building some of the world's most powerful supercomputers. I have spent decades deploying compute all around the world. I met CTO Mike, as well as HPC systems engineer Raheem, during initial OEM engagements, and the thought of what they're doing really spoke to my inner geek: deploying supercomputing at scale, and the workloads in science, applied mathematics, and artificial intelligence, the ways they will change our lives in the short term, the way they've accelerated the deployment and development of so many things.

So many things that we will all come to realize, that will come to shape our lives and our children's lives. But I think something even more low-level that a lot of us who are into this type of stuff, into computing performance, care about is just the raw scale, the power, and the ability to do the big math like nothing that has ever been done before. Building out computing architecture that can tackle the biggest, toughest, most complex problems that we are able to conceive of as a species is really exciting.

I think that not only for myself, but for my colleagues and teammates, that is a big reason why we came on board at Applied. It's one thing to work with a supercomputer, or if your company has a supercomputer; that's really fun. It allows a lot of creativity and thought, coding and tweaking workloads and compute performance to change things. But what about when you have seven of them, or nine, or 21 of them? That's the direction we're heading in.

That's been a bucket list thing for myself, and I think for many of us, to be hands-on, to build, to maintain, to operate our own infrastructure like this. It's not an opportunity many people have. So it really is a pretty wonderful experience engaging with all of the hardware and the capabilities we have at Applied. We're very lucky to have that. We have exceptional talent working within our group, including HPC engineers and systems engineers from Apple, Meta, Cray, Oracle, and others. And our team shares that same passion about supercomputing and how HPC is helping change the world in so many ways. We continue to actively recruit the best, and we have excellent internal support in doing so.

The next thing I'd like to talk about is our innovation roadmap. Technology standards are constantly changing. Exascale computing and exabytes of data will soon be a thing of the past. Your iPhone a decade from now will have an exabyte of storage. Applied maintains deep relationships with the OEMs and the vendors that are changing the landscape today, and I think possibly more importantly, Applied is developing and growing deep relationships with the OEMs and the vendors that are gonna bear that standard in the industry tomorrow. Our facilities and computing infrastructure are designed with upgradeability, expansion, and accommodation of next-gen hardware in mind. From power and cooling delivery to automation, oversight, and compliance, Applied is focused on evolving at the pace of Silicon Valley at the scale of Texas.

So the big message would be this: There are a lot of companies out there that you can lease or purchase cloud computing from. There are some companies that you can lease a supercomputer from. There are some companies that you can lease data center space from, and an even smaller subset of companies that you can lease data center space from that can accommodate anything dense enough to be considered high-performance computing or supercomputing. What Applied does is all of those things together. There is nobody else in the world doing this today. That gives us several distinct advantages, from the value prop to the fact that we can make changes on the fly, configure the computing the way it works for you, and manage it as a service on the back end.

We are completely unique in that position, and it makes us, I believe, not only a future market leader, but the only company that can, top to bottom, accommodate HPC, AI, and supercomputing as a service for AI-focused customers and for the enterprise alike.

Harel Boren
CEO, SwarmOne

SwarmOne, as a company, is preparing to launch a horizontal AI training platform for AI training at scale. And when I say at scale, I mean training for numerous customers, not only a colocation or single-customer service. So I'd call it a fast, long tail of AI training. My name is Harel Boren. I'm CEO of SwarmOne. My experience in working with Applied was a continuously excellent experience across all domains, from the very top of the organization all the way to the shop floor, so to speak, to the servers in the data center.

Now, with our many beta customers, some of them creative power corporations, some of them technology unicorns, we expect the technology to mark a pivotal change in how AI training is actually done across a broad array of industries, domains, frameworks, and so on. SwarmOne effectively eliminates the huge expense and friction associated with contemporary instance-based solutions for AI training. The large demand for AI training at scale for numerous customers calls for equally large GPU compute capacity that's available, that's continuously reliable, and that's compliant with the most stringent certification requirements. Essentially, the combination of all of this caused us to seek out Applied, and this is what caused us to select Applied as the top provider, or a top provider, in the field.

We understood Applied was the right solution after we performed essentially three stages of tests with Applied, bottom up, all crowned by varying loads, on-loading and off-loading AI training capacity of all sorts and kinds. And when we saw the performance, which we monitored over long stretches of time, and the sheer power of the services provided, it was very clear. The answer was clearly written: Applied was the right way to go. We found in Applied an ideal partner in promoting our vision and implementing our capacity to soon change the panorama of how AI training is going to be done.

As an organization keenly acquainted with the AI training field, and a true next-gen data center operator, Applied demonstrated out-of-the-box integration with SwarmOne's platform. In addition to the huge amount of compute at its disposal, the huge amount of power at its disposal, and the numerous strategic relationships it has throughout the hardware and services value chain, it all boils down to a second-to-none partner in capturing one of the largest markets on Earth: AI training for numerous customers in all industries.

When we started our relationship with Applied, I was expecting, for a large organization, some of the experience people have with such organizations: heavy decision-making, lots of red tape, which is very common in larger organizations, especially those experiencing accelerated growth like Applied. And what we discovered was the complete opposite. We discovered an agile, highly attentive, quick-to-respond-and-react set of people. It actually felt much more like a set of people than an organization, at all levels. It was all very personal, very immediate: in management, in financial, in the technology ranks, in the immediate implementation, in the follow-up.

That was a really, really great surprise and carried through all of our ranks, far beyond our wildest expectations. We're heading towards a very, very large available market: a $25 billion yearly available market, which is set to multiply itself many times over, multiplying once every 3.4 months, and by the most conservative assessments, once every 6 months. Within that, we are about to launch a unique technology, solving a unique need in the hottest market on Earth, AI training. Our expectation for the next 18 months is that Applied will go hand in hand with us and summit this very high Everest, together with SwarmOne.

Erin Kraxberger
VP of Government Relations & Public Affairs, Applied Digital Corporation

Our next presenter has his hands on every aspect of the technology here at Applied. Here to talk about data center building specifications, security implications, and all of the technical components is our CTO, Mike Maniscalco.

Mike Maniscalco
CTO, Applied Digital Corporation

... Thanks, Erin, and thanks to all of you for being here this morning. I've been sitting in the back of the room all morning, listening to everybody speak, and I'm thinking to myself: What can I say that hasn't already been said? But in all honesty, I think it really speaks to the strength of our team and the quality of the team we've put together. So I'm thrilled to talk to you today about some of the work we're doing. You know, one thing I think we've touched on a fair bit here and there is how quickly things have moved here.

I started working with the team over two years ago, and I can tell you I'm more excited than ever about the work and the things that we're doing today. There's just a ton going on, a lot of really great people, and a problem that I believe is important to solve, because we haven't really talked about the why of this equation a lot today. For those of you in the AI world, or following the AI world, we're living in a world where, and it's funny because this is a line from our manifesto, it feels like innovation is racing against time, and that's an absolute truth.

We're living in the middle of this AI race to dominance, as I like to describe it, where you've got the brightest minds in the world trying to deliver some amazing technologies. I think eyes have been opened to what these technologies are capable of. This technology is not going to slow down. It's going to continue to move faster and faster, and the smartest minds in the world are working on these problems because they believe they have world-changing implications. To work alongside these people every single day, to deliver a digital foundation for them to do their work on, to me, is extremely exciting. If you're a world-leading AI researcher, what you want to do is train your models. However, to train a model in today's world is extremely complex.

It's not just the PhDs at Stanford working on the math and the science and writing the algorithms and the user experiences. That's the front side of it. On the back side of the house, there's this enormous requirement for digital infrastructure, and these are the things an AI researcher doesn't want to think about. They want to be handed the keys to a supercomputer, and they want to go fast, and that's what we're providing at Applied Digital. To do this, this digital infrastructure requires a combination of power, compute, facilities, networking, cooling, a lot of different specialties. I think one thing that's really changed in this world is this AI explosion, driven by what I think we'd all agree could be called the ChatGPT effect. Because if you rewind to, what was it?

A year ago, when we were starting to build our Jamestown facility and really getting that up and running, there was a lot of excitement, but nothing like what we're seeing today. The ChatGPT effect essentially proved, or started to prove, to the world what the possibilities were, but also the capabilities of GPUs. Until that point, with large language models, going larger and larger on parameters and larger and larger on training workloads, it wasn't really apparent what that was going to produce. But ChatGPT showed that adding more parameters, more compute, more GPUs, more power to the equation is going to produce better outcomes, and that's changed everything. But it's not easy, and that's, I think, where Applied Digital really comes in. We're looking at these complexities, and we're welcoming them with open arms.

I think Erik was a perfect tee-up to this conversation. Erik's on my team. Erik and the team he's built wouldn't be here if it wasn't for that complexity. That's what excites them. That's what allows us to deliver services I don't think many people in the world are capable of delivering. We've built up a fantastic team. We're tackling power, we're tackling scale, we're tackling construction and, as Brad alluded to, networking challenges. We're talking about ultra-low-latency networking. We're talking about building some of the largest supercomputers in the world. That's factual: the TOP500 list is published, and the performance and capabilities of these large H100 clusters we're developing put them in that league. To remind everybody how we do that, there are essentially three different services we're offering.

Accelerated GPU compute as a service is turning over a handful of GPUs to somebody who just needs to rent them for a few hours, whether that's bursts for inference, or a researcher, or a team that's not ready to scale up to hundreds of millions of dollars of compute. There are plenty of those folks out there in the world. But there are also some of these leading AI researchers who are ready to scale up to hundreds of millions of dollars of compute, and they're looking for somebody who can deliver supercompute as a service. And that's one of the unique strengths of the services and the expertise that we've built to deliver to these customers.

Lastly, and I don't want to understate this because I think it's important, and I'll talk about the trend a little later, we're delivering these colo, next-generation data centers specifically designed for some of the largest AI supercomputers looking forward, and they're just getting bigger. We do this by leveraging a lot of NVIDIA's technology. We are, I think, one of the few people in the world who are really deploying these NVIDIA HGX reference design clusters for our customers, turning them over as bare metal, and then supporting them through the implementation. Getting something like this in place isn't a one-man job. And to go back to the team comment earlier, this is where I think we shine.

To get a cluster of this size, between 256 and 5,000 H100 cards in place, requires a number of specialties. It requires power as a specialty. It requires design and construction as a specialty. It requires networking, ultra-low latency as a specialty, compute as a specialty, storage as a specialty, and the list goes on and on and on of the team of people we have to put together to make this happen, and it really is impressive. I like to think about the work we're doing as turning pipes and air into threads of opportunity for our customers. A lot of people ask about what we're doing with our data centers, specifically in North Dakota, and why is that different? Well, we'll start with the network. When you think about AI training and workloads, there are basically two classifications.

You've got your training clusters, which are meant to compute on large amounts of data and then spit out an algorithm that can be delivered to customers to use for things like ChatGPT or Character.AI. Then on the other side of the equation, once that model's trained, the way customers interact with it is through inference, where you're now interacting in your consumer app with that compute. Now, training and inference have two very, very different sets of requirements. Jarrett talked a lot about this in his interview. For training, all that compute is happening locally, so you don't need ultra-low latency to the data center. You don't need massive pipes to the data center. You need to upload a workload.

You want that workload to run effectively for some extended period of time, days, weeks, months in some cases, and then you want that workload's results spit back out to you. During that time period, all the compute, all the communication is happening locally, so latency to the outside, the internet side, is somewhat irrelevant. But what is highly relevant is the latency within the data center, and that's where this complexity comes in, and that's where the NVIDIA reference architecture comes in. We do all this on InfiniBand networking, which is ultra-low-latency networking specifically for AI training applications. And these are not insignificant lifts. Just in physical infrastructure alone, we have... Let me read this.

A 5,000 H100 cluster requires 250 kilometers of fiber, for a total of 15,000 InfiniBand fiber cables. Just plugging these things in takes teams of people weeks. So it starts to highlight why an AI researcher doesn't want to go through cabling, doesn't want to think about power; they just want to run their training algorithms. Another point to highlight quickly here, and I guess moving on to the next slide, is that power, as simple as it seems, is truly the primary ingredient of all this innovation. I think Brad did a great job of talking about this. We are essentially building an AI brain in a building. We are, just like your brain, trying to condense an enormous amount of compute into a very, very small space.
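A rough back-of-envelope check of that cabling figure, sketched in Python; the eight fabric links per node and the three-tier fat-tree layout are illustrative assumptions rather than details given here:

    # Rough cabling estimate for a ~5,000-GPU H100 cluster (illustrative assumptions).
    gpus = 5_000
    gpus_per_node = 8                      # HGX H100 nodes carry 8 GPUs each
    nodes = gpus // gpus_per_node          # 625 nodes

    links_per_node = 8                     # assume one InfiniBand link per GPU (rail-optimized)
    node_to_leaf = nodes * links_per_node  # 5,000 cables
    # Assume a non-blocking three-tier fat tree: each extra tier adds roughly as many cables again.
    leaf_to_spine = node_to_leaf
    spine_to_core = node_to_leaf
    total_cables = node_to_leaf + leaf_to_spine + spine_to_core   # ~15,000 cables

    avg_length_m = 250_000 / total_cables  # 250 km of fiber spread across them
    print(total_cables, round(avg_length_m, 1))   # 15000 cables, ~16.7 m average run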

The reason you do this with these clusters is that you have physical distance limitations to meet the latency demands inside the cluster, for the nodes to communicate with each other. And that constraint, in our world, is 30 meters. All those nodes have to be within 30 meters of each other, which speaks to our proprietary design for the new data centers we're building. Beyond that, you also want to squeeze all that compute into tightly dense cabinets, or racks.

To give you some perspective, if you were to go out and survey the market today and say, "Hey, data center in Metro Dallas, what's your idea of a high-density rack?" they're gonna come back and say, "Well, 5-12 kW is what we really require." Well, one of our servers with 8 H100s in it is 12 kW, and at 12 kW, one server per rack, for hundreds of servers, just isn't going to cut it. So we're looking at going to 45, or we are at 45, pushing the boundaries of air cooling, and looking towards 150 kW per cabinet for our buildings, again, to condense the compute into tightly confined spaces, and that's extremely important. And then let's take another quick journey.
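Putting the rack math above into a quick sketch, using the 12 kW-per-server figure just mentioned and the rack budgets discussed:

    # Servers and GPUs that fit in a rack at different power budgets,
    # using the ~12 kW HGX H100 server (8 GPUs) figure from above.
    server_kw = 12
    gpus_per_server = 8

    for rack_kw in (12, 45, 150):          # legacy "high density", current air-cooled, future target
        servers = rack_kw // server_kw
        print(f"{rack_kw} kW rack -> {servers} servers, {servers * gpus_per_server} GPUs")
    # 12 kW rack  -> 1 server,   8 GPUs
    # 45 kW rack  -> 3 servers, 24 GPUs
    # 150 kW rack -> 12 servers, 96 GPUs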

Right now, I think everybody in this room probably knows this, constraints in the market are compute, so access to GPUs and access to InfiniBand networking. Those are the constraints in the market today. That's what's slowing everybody down. And I'll tell you, software developers aren't used to waiting, so they want it yesterday. I think that's—it's a very, very short-term limitation. The world's gonna solve its silicon problem, and there's going to be another problem that we're already seeing. There's not enough space in the world to put these clusters. There are not enough spaces in the world where you say: Hey, I want 45 kW per rack. They're gonna say: Our HVAC system can't handle that. And we need that in one data hall, very, very tightly packed.

They say, "Well, we can put one here and one here," and that's not gonna work. The next constraint is going to be space, which we've recognized. The next constraint is we're moving to a trend of gigawatt-scale data centers. Finding gigawatts of power in Metro Dallas, that's hard, and that's expensive, and that takes a lot of time. Another great one I like to point out is, if I need a gigawatt of, or let's just say 200 megawatts of power capacity in a metro area, and a new substation has to be built, and transmission lines may have to be run, that's gonna take five years and hundreds of millions of dollars... or sorry, millions of dollars a mile to get that power to the, to the data center.

What if you flip the script and say, "We're gonna go to where the power's at. We're gonna go where there's stranded power, and there's 200 megawatts available in six months." What if we go there? The challenge for most of the operators today is, "Well, where's the network? Where's the connectivity?" Well, we can build fiber to those data centers in six months for tens of thousands of dollars a mile. It seems like a no-brainer to me. That's why I'm really excited about the work we're doing in North Dakota. Brad talked about SLAs; I'll highlight that quickly. AI workloads, without getting into the detail, are designed for failure. They're designed to have bugs, they're designed to checkpoint, and they're designed to restart. As soon as they fail, they'll restart.

And I think we can really leverage that with our data center design by eliminating a lot of the SLAs, by saving money to the customer, by getting these clusters in their hands as fast as possible, because that is what they want. We don't have to wait for generators. We don't have to wait for UPSes. Yes, there will be some expected downtime, but when it's back up, the workload just picks up and starts running again. That's the future of AI. Yeah, I think that covers all about that. So to kind of wind up a little bit, clearly there are a lot of complexities, there are a lot of challenges. We're trying to bring super compute-level systems to our customers overnight, and you just don't turn on a supercomputer overnight.

It requires a strong team, it requires a lot of planning, it requires strong relationships. These are the things that Applied Digital brings to the table for our customers. We're solving the problems of power. We're solving the problems of scale. We're solving the problems of connectivity. We're solving the problems of doing your diligence on your OEMs and vendors. We're solving the problem of supporting that cluster once it's up and running. And we're doing this for AI innovators, large and small, and we're enabling them to do really, really incredible things. I started this off by saying how excited I am to be here. Hopefully, that came through in my presentation today, because we're doing amazing things as a company, and we're offering this in a fantastic package of services for the future.

With that, I will say thank you, and, I think that's fine. Yeah, there we go.

Speaker 24

What if building a data center was actually a force of progress? What if a well-designed digital infrastructure served as a foundation where innovation could flourish? What if pipes and air were the threads of opportunity, and dynamic server configurations were catapults of transformation? What if power wasn't just a needed commodity, but the primary ingredient of innovation? In a world where speed to market matters, this is where you come to turn what if into innovation applied.

Erin Kraxberger
VP of Government Relations & Public Affairs, Applied Digital Corporation

Okay, next up, we have Applied co-founder, Jason Zhang, to talk through Sai Computing.

Jason Zhang
Co-Founder, Applied Digital Corporation

Hi, everyone, my name is Jason Zhang. I'm one of the co-founders of Applied Digital. I'll be sharing an update and an overview of Sai Computing, which is our new cloud services business. We started incubating and building the fundamental parts of this business in 2022 and officially launched this business model in May of 2023. Sai Computing is a wholly owned subsidiary of Applied Digital, and we offer specialized computing around GPUs. We offer GPU cloud computing to end markets in high-performance computing and artificial intelligence, and a variety of other use cases our end users need cloud computing for, but it's mainly focused on GPU compute resources and helping meet the ever-growing need in HPC. The cloud services offerings are focused on three major areas.

We have reserve compute, which is focused on much longer durations and much larger quantities of GPUs. Typically, what we do here is a six-month minimum contract length, all the way up to five to six years. It's typically a large training cluster that we deploy on behalf of our customers, and they're using it for large-scale language model training or other types of model training workloads. We also offer other types of compute contracts. We have burst compute and also short-term compute. You can think of these as much shorter contract durations, plus some on-demand capacity through our ecosystem partners that allows users to test and run shorter-term compute workloads using our hardware and our infrastructure.

There are a variety of GPUs in our portfolio offerings. We have the typical older generation, A40s, A6000s, A100s, which were kind of the quintessential GPUs in 2022 and preceding years. 2023 has been really focused on H100s, and our deployments of H100s have really ramped up since June of this year. For future offerings, we're now working closely with NVIDIA to explore deployments of Grace Hopper, which is the GH200, and Hopper Next, which is the generation beyond GH200. We're also currently deploying L40Ss, which is a replatform of the L40 benchmarked much closer to A100 performance, but used specifically for inference workloads.

We're working with our customers and NVIDIA to do test unit deployments of these right now, so that we can scale them out at large scale for inference workloads. The reason we have these different contract lengths, durations, and types of deployments is that there's never a one-size-fits-all for all end users. There are some customers who would much rather have a very large cluster that they build out. In order for us to commit to such a large CapEx outlay on the equipment side, it of course warrants the larger contract value and the longer contract duration. Those types of consumers of compute are, of course, going to be different from your on-demand type of needs, right?

Where sometimes you need burst capacity for certain workloads that are a couple of hours or even just a couple of days, instead of committing to a reserve compute that is multi-year. Sometimes you can just be in the market and absorb the burst capacity that is available in the market. So the on-demand and burst and the shorter term capacity is also very important because, one, it allows customers to get a glimpse of our offerings, but also allows them to have not as much upfront commitment, where they might not be as deep-pocketed as some of the large reserve customers. So again, it's a good mix so that we can better serve our end users.

On the short-term type contracts, we've partnered with a variety of different platform creators and software developers who are developing platforms to better execute and increase the utilization of these GPUs when they're in idle mode. So in these instances, we can better increase the utilization of existing equipment, but also offer attractive pricing and offer attractive deals for smaller customers who are just getting ramped up and also exposed to GPU compute. So in the GPU cloud revolution or this dynamic that has really taken off in the last 12 months, we've seen that location-specific type of workloads are less and less so, because these types of very compute-intensive workloads tend to be location-agnostic.

So it allows us, as a company that was previously very focused on finding power and then building out computational resources, really to play to our strengths, right? We can go out and find locations where the power availability and the cost of delivering that compute is much more attractive than your typical computational epicenters, like in the Bay Area or like on the East Coast around Virginia, right? So we have really put together a great offering from a geographical perspective, having locations around the Midwest and also Mountain U.S. regions, where we can take advantage of ample amounts of power and being able to deliver that power into computational resources in a more effective and cost-effective way for our customers. And because of these workloads that are a lot less location-specific, we can do that and take advantage of these opportunities.

As we are building out this cloud services offering, a very big component of the cost model is, of course, the equipment and the facilities, right? We've partnered very closely with the largest equipment manufacturers and, of course, NVIDIA themselves, which provides the GPUs and the networking equipment to build out these clusters and these deployments. We have a very close partnership with Supermicro, and we also have very large orders in place with HPE and Dell, in addition to Supermicro. The key areas of differentiation for our GPU cloud services are the following: One, we've been one of the few cloud providers that have deployed H100s with InfiniBand networking at scale.

We are deploying a couple of clusters ranging from 3,000 to even 8,000 H100 GPUs in one location, and these clusters are some of the first of their kind in the world being deployed by NVIDIA customers. We're very fortunate to be one of the first to deploy these, but we're also working closely with our customers to work through a lot of the kinks that come with deploying cutting-edge technology. We're also one of the only cloud providers that offers bare metal. In that model, we hand over access to the actual servers to our end users and allow them to control as much access as they would like when it comes to provisioning and utilizing the equipment.

We also have a team of very experienced HPC engineers and storage and networking experts who help support our end users in these deployments. Again, we're doing something that has been done by very few companies in the world, and we need to be at the bleeding edge, helping our customers work through a lot of the early kinks that come with deploying this cutting-edge equipment. The last point is on vertical integration. I'd like to connect Applied Digital's core business, which is building data centers, with the fact that Sai Computing is now building one of the largest and fastest-growing GPU-cloud-specific operators in the world, where we can use Applied Digital to build GPU-specific facilities for Sai Computing.

This allows us to remedy a very important constraint in the market, which is data center capacity. As we grow Sai Computing and we can deploy those GPUs in our own facilities, that allows us to, again, be a lot more flexible on how we deploy, what size we deploy, what timelines we deploy these types of clusters for our end users, and we're not beholden to a third party that we work with or a third party that we have to contract capacity with. On the product roadmap, we are working today, again, in the bare metal offerings, where we provide the facility, provide the equipment, and then hand over access to those machines to our end users.

But as we scale out our business offering and our services offering, we'll start to have a lot more virtualization and container orchestration tools that we will be building on top of the bare metal offerings. Again, these are additional offerings that continue to refine and improve the product, but it's not anything that's holding us back today. A lot of our deployments today are bare metal deployments, and our customers are very satisfied with that, because we're deploying with end users that typically are a lot more sophisticated and have their own internal infrastructure teams. So bare metal access is what they prefer and what they work well with. So as I mentioned before, we started the business and started incubating the idea in 2022, but didn't really launch it until May of 2023.

The catalyst to that was, of course, the signing of our first large contract with Character.AI. That relationship has blossomed and expanded from that initial contract that we signed for 5,000 H100 GPUs. Character.AI, right off the bat, was backed by some of the largest companies and VCs in the world, such as Google and a16z. They raised $150 million at a billion-dollar valuation, pre-product, and it has been absolutely amazing working with them to deploy one of the largest training clusters focused on H100 NVIDIA technology over the last couple of months. Here's a recap of what has unfolded in the last couple of months since we started working with Character.AI.

We initially signed the first compute contract with them at the end of May, and we started deploying that first cluster for them in June. This is unheard of in terms of turnaround and speed. We worked very closely with Supermicro and NVIDIA to deploy, as Character was a very key strategic account for NVIDIA, and Noam, of course, had connections throughout NVIDIA's leadership. We were able to deploy that first 1,000-GPU cluster for them within the first month of signing that contract. Since then, we've scaled up our commitment from Character all the way to 10,000 GPUs, now expanding to 16,000 GPUs and beyond for 2024. Again, a very good example of how we've landed a key account, deployed and executed for them, and then expanded that relationship over time.

So in the last 12 months, we've seen HPC really grow from a very niche offering to something that is top of everyone's mind. With this AI boom and generative AI really taking over everything we've seen, from business applications to consumer applications, we've really seen demand for the fundamental layer that powers a lot of that explode, right? Because all of these applications and all of these new models and new technologies are based fundamentally on the equipment, the facilities, and the computational resources that power them. We're lucky to be at the ground floor of all of this, and having built a GPU cloud business in a matter of a couple of months, where usually this takes many years, if not decades, to build, has been quite humbling for me to see.

And I've been super thrilled with the team that we've assembled to help pull these offerings together and deliver them to the market. We've been overwhelmed by the amount of demand and interest from generative AI companies, large tech companies, research institutions, and all types of different end users, and we've seen our demand forecast to NVIDIA really skyrocket from a couple thousand GPUs to now deploying 30,000+ GPUs before mid-2024. We started Applied Digital two and a half years ago, and we've seen that business really skyrocket and grow into something that hardly resembles where we started on day one. Sai Computing is no different.

We've only been at it for four or five months, but we've already seen a lot of traction in the market, and we've basically built out a whole new business segment within Applied Digital in a matter of months, and we're super excited to see what the future holds for this business segment, but also for Applied Digital, broadly.

Cenly Chen
Corporate SVP, Strategic Sales and Managing Director of BV, Supermicro

Good day, dedicated staff of Applied Digital and the respected industry friends. I am Cenly Chen, the Corporate Senior Vice President of Strategic Sales.

... and the Managing Director of B.V. at Supermicro. It fills me with honor to stand before you today to discuss a partnership that brings vast promise and potential to us here at Supermicro. Our company, in collaboration with Applied Digital, has embarked on an exciting journey that promises to alter the industry landscape and redefine the parameters of success. The exceptional aspect of our partnership with Applied Digital is that it is not merely an alliance of two companies. It represents great ideas, innovation, and shared values that will propel us both to unprecedented heights. We deeply value our shared vision with Applied Digital to develop resource-saving IT infrastructure specifically designed for high-performance computing applications. Our goal is to deliver unparalleled solutions to the market. As you may be aware, Supermicro has been one of the fastest-growing IT and AI infrastructure companies in the world.

Our focus lies in green computing servers, storage, IoT, telco, and AI servers based on our unique building block solutions developed over the past three decades. Our impressive growth rates of 50% and 40% in 2022 and 2023 provide a snapshot of how well received our data center solutions are in the industry, specifically in the age of AI, where GPU systems are consuming more power than traditional CPU-only servers. Take, for example, the highly demanded generative AI training systems such as the NVIDIA HGX H100-based AI platform, which consumes up to 10 kW per system, and the forthcoming NVIDIA Grace Hopper-based SuperPOD AI training cluster. Those examples prove that Supermicro's green computing is the optimal choice for generative AI applications. We also provide unparalleled H100 PCIe, L40S, and L40-based GPU systems to address growing inference market demands.

We're truly leaders in AI infrastructure, and with the unwavering support and partnership of Applied Digital, our collective success knows no bounds. Our experience with Applied Digital so far has been nothing short of amazing. This partnership represents not just a business venture for us at Supermicro, but also a commitment to a shared future. It's about reaching milestones, fostering innovation, driving business, and creating enduring value for our collective customers. Thank you for being such a steadfast partner to Supermicro and for your faith in us. Let's continue on this beautiful journey together.

Erin Kraxberger
VP of Government Relations & Public Affairs, Applied Digital Corporation

Nearly every new digital innovation is proving to be extremely power hungry, and Applied has the power. Returning to the stage to talk about what's next for Applied and how the team is powering the world's next innovations is Wes Cummins. But first, a quick video.

Speaker 25

The speed that AI is going to move, I think, is going to catch everybody off guard because it's gonna compound really quickly. The systems will retrain themselves and make themselves move even faster and faster and faster. So you're looking at a large market, ton of activity, a lot of funding going in behind it, and I think an exponential, if not more, growth curve, right? It's gonna go up and to the right really, really quickly is, I think, what we hope and we believe. I just think it's massive. And it's not only massive in dollars and cents and the things that I think, you know, the sales team likes to think about, but I also think it's massive in the potential impact, because I think that these technologies have the opportunity to reshape society.

They're gonna do drug discovery, as we saw with COVID. They're going to do, you know, a lot of cancer research. And yes, they're gonna build some really great personal assistants and copilots and help people focus on what they really love, rather than the mundane details of the job, help people be creative. We're seeing the sparks of all of this, but I think the ability of these technologies to reshape society is also enormous, too.

Wes Cummins
Chairman and CEO, Applied Digital Corporation

Sure, am I turned on again? Yeah, you guys can hear me? Perfect. So, to wrap up, and then we'll do some Q&A. First, I want to thank everyone. Hopefully, you know, a lot of people in the room, and maybe some people are new to it, but everyone generally gets to see me, and I speak at a lot of different events. So it's great to have other people at our company, other employees, see the full team. I'm super proud of the team that we've assembled here at the company and what they've accomplished. You know, we started, as Jason mentioned, and I mentioned earlier, two and a half years ago. Our August quarter, we just did a little over $36 million of revenue.

That was our first quarter, and it was over 50% of what we did in the entire last fiscal year. We've guided for approximately $400 million this year, and I want to make a couple of points about our company. I think what we have done extremely well since the start of our company is speed to market. That was true in the Bitcoin mining market, which I draw a lot of parallels from to what we're seeing now in AI. What we saw at that time was, in 2021, everyone was rushing to get the Bitmain S19j Pro miner. It was, how many could you get? That was the big bottleneck.

Then as you went into 2022, there were warehouses full of S19j Pro miners looking for data center capacity to plug into. That's where we came in, and we built out our 500 MW of data center capacity. The stat I like to look at for this market, and there's actually some parallel here, I don't want to compare them too much, is that the forecast is roughly 1.2 million H100 chips for NVIDIA this year, somewhere in that neighborhood, and two million for next year. So let's say 3.2 million H100s through the end of 2024. That needs about 4.8 GW of IT power, so probably total data center power around 6.5 GW.

So 6.5 gigawatts, fully loaded, through the end of 2024, compared to a data center market that's roughly 22 or 23 gigawatts worldwide; it's a massive step up, and then it should just keep going from there. So I think for us, we could land in a similar position. We're in the process of kicking off... You heard a lot today about our new kind of AI-brain data center, the importance of network density and staying within the magic number of 30 meters, and that entire building can be within that 30-meter radius of the network core. Which, in theory, means you could put roughly 60,000 H100 GPUs on the same spine, same network core, same training cluster. There will be nothing like it in the world.
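A quick sketch of that GPU-to-gigawatts arithmetic; the per-GPU power figure and the implied PUE are back-solved from the totals quoted above rather than stated separately:

    # Back-of-envelope: H100 shipments through 2024 -> data center power demand.
    h100_units = 1_200_000 + 2_000_000      # ~1.2M this year plus ~2M next year
    kw_per_gpu = 1.5                         # assumed IT load per deployed H100, including its server share
    it_gw = h100_units * kw_per_gpu / 1e6    # ~4.8 GW of IT power
    pue_implied = 6.5 / it_gw                # ~1.35, implied by the 6.5 GW total quoted above
    print(round(it_gw, 1), round(pue_implied, 2))   # 4.8  1.35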

So we're excited about that, but I think we're stepping into a market where we have, you know, 300+ megawatts of power that's contracted, that'll come online, you know, next year. So I think we have a really special window here in the next 24-36 months, where power is going to be a massive supply constraint. And I... You know, it's not there yet, so when this started to happen, everyone, you know, ran out into the market, and us included, and contracted what was available, contracted everything through 2024. But I think when we see next year, we're gonna start to really run into the constraint of finding places to put, you know, these types of workloads.

And by the way, that was just the NVIDIA math, right? It doesn't account for what AMD or Intel are going to do, or just standard data center growth. So I think we're in a really unique position. And to put a point on this, I won't say which power provider, but at one of the largest utility networks in the country, our power guy was speaking to his friend there two days ago, and his friend mentioned that they had just received an inquiry for 900 MW of power. That's a big number, and he said it's the eighth inquiry they have received between 500 MW and 1 GW in the last two months. And it's all data center driven.

You know, I think we have a lot of exciting things going on. We're close to getting all of our blockchain data centers ramped up, hopefully in the next two weeks. The AI cloud is growing quickly. We now have large language model customers, text-to-image model customers, and a software Copilot customer. So we have a good, growing customer set there and a really strong pipeline, as Jason mentioned, and I think that gets a lot of attention. A lot of people have been concerned about the blockchain piece turning on, and we have that, I think, solved at this point.

But I think the piece that gets overlooked for our company is the value of this contracted power and the land to go with that power. In Ellendale, in North Dakota, we have the building permits, we've done the geotech, and we're getting ready to mobilize, as Brad said, and put the foundations in for that building. So that piece of our company, which I think is probably the most valuable and most important piece, often gets overlooked, and in my opinion it's going to become extremely apparent how valuable that is over the next six to nine months.

I think everyone's gonna really see the power constraint that I don't think is completely recognized in this space right now. So with that, I think there's a lot of exciting things going on through our company. We've accomplished a lot. Super proud of our team. Glad you were all able to see some of our team members and leaders of our groups here. And Erin, I think we open up the Q&A now? Yeah. So we'll do Q&A. David, why don't you come up, too, in case anyone has financial questions.

Erin Kraxberger
VP of Government Relations & Public Affairs, Applied Digital Corporation

Please raise your hand if you have any questions. Sorry.

Wes Cummins
Chairman and CEO, Applied Digital Corporation

My mic isn't gonna...

David Rench
CFO, Applied Digital Corporation

Slide. Yeah, great.

Darren Aftahi
Managing Director and Senior Research Analyst, ROTH Capital Partners

Darren Aftahi from Roth. You talked about power. Can you just—beyond the 300 MW that's contracted, I'm sure you guys are doing a lot of site surveying of-

Wes Cummins
Chairman and CEO, Applied Digital Corporation

Mm-hmm.

Darren Aftahi
Managing Director and Senior Research Analyst, ROTH Capital Partners

other power sites. Just how constrained is that beyond kind of the loads you already talked about?

Wes Cummins
Chairman and CEO, Applied Digital Corporation

David, what was the message? Was it our pipeline is a little over a gigawatt outside of what's already contracted? We were reviewing that a couple of days ago. So I think it's around 1.1, 1.2 of things that are in what I would say, the pipeline of assessment. But like, I'd like to mention this. I liken it back again to Bitcoin. You know, I think it was aggressive power chasing when, you know, Bitcoin moved to the U.S., right? And I think it feels like 10x that right now. And, you know, we lived through that time as well. So, it's definitely everyone's looking.

With our guy and the way we've gone about this for the last three years, or I guess two and a half years, we have a pretty good understanding of how to find those sites and of the elements that we need within that, which I won't name all of up here. But we have a pretty good idea, and I think we have a lead in being able to go out and find that power. But beyond what we have contracted, the pipeline that we were reviewing two days ago is about 1.1-1.2 gigawatts.

Speaker 19

Can you hear me?

Wes Cummins
Chairman and CEO, Applied Digital Corporation

Yeah. We got it.

Speaker 19

Thanks. You guys have put out guidance on the last couple of earnings calls and in the last two presentations you've done. I was hoping you could unpack it a little bit further for us, because you've just completed your first fiscal quarter of 2024. The numbers you've talked about are significant-

Wes Cummins
Chairman and CEO, Applied Digital Corporation

Yep.

Speaker 19

- and the obvious implication is that there's an extraordinary ramp in the next couple of quarters. And so I'm hoping you could maybe unpack it a little bit more for us: how we get from $10 million of adjusted EBITDA, for example, or whichever metric you think is most relevant, to the numbers you're talking about on a trailing basis by the middle of 2024. And then maybe talk a little bit about your CapEx, your outlook for CapEx-

Wes Cummins
Chairman and CEO, Applied Digital Corporation

Sure.

Speaker 19

and how that gets funded.

Wes Cummins
Chairman and CEO, Applied Digital Corporation

Sure. So when we talk about our fiscal 2024 guidance that we reiterated on Monday, there's really two out of the three businesses primarily contributing to that. So you have the blockchain, the data centers that we've said many times, when fully ramped, get to you know, roughly $300 million of revenue and $100 million of EBITDA based on that business. And so when you think about that on quarterly progression, you know, we had Ellendale for a part of the quarter in Q1 that we just reported, and we had all of Jamestown, and we didn't have any of our Garden City, Texas, facility.

We've said now that we expect Garden City to turn on by the 23rd of October, and that will ramp through November. You'll have those for November, and then you'll have some of the AI cloud business, several clusters running for the quarter there. We've said, I think, and David had the numbers up here, that on the AI cloud, we assume per H100 cluster, so the 1,024, just think per 1,000 GPUs, about $1.5 million of revenue per month. The EBITDA margin on the AI cloud business, let's just call it roughly 80%.

That's again why David specifically called out earlier that he was pointing to an EBIT margin, or an operating margin; we point to that because I think the depreciation is real. But when we're guiding to an EBITDA number for the year, that's what matters. And so then if you're looking out into February, essentially from December 1 to the end of February, or sorry, we have weird quarters, but you have $75 million of blockchain, that's the $300 million divided by four, and the same for the May quarter. And then we've talked about, or I've talked about publicly, 26,000 GPUs online in the April timeframe. So if you step those up...

But let's just go to what a full quarter of those combined looks like. So 26,000 GPUs in a single quarter. $1.5 million times 26 is $39 million a month, times 3 is $117 million for the quarter, plus $75 million from the blockchain hosting. That gets you to a little over $190 million, I think $192 million, is that right? I'm trying to do the math in my head. But that's roughly where that is, on a quarterly basis.

On EBITDA, those would go to $25 million from the blockchain data centers, and then $117 million times 0.8 would be your EBITDA from the AI cloud business. So you see those numbers get pretty large pretty quickly as we roll that out. Then the CapEx portion. For the clusters, think of every cluster running as roughly $40 million of CapEx. We get significant prepayments from our customers. From our primary customer, we get a little over a 60% prepayment; this has been disclosed in our public filings. So we get $22.5 million per cluster that goes out.
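Pulling the quarterly math above together in one place, with the 80% treated as a flat EBITDA margin for illustration:

    # Quarterly run-rate sketch from the figures above (all amounts in $M).
    gpus_online = 26_000
    rev_per_1000_gpus_per_month = 1.5
    cloud_rev_month = gpus_online / 1_000 * rev_per_1000_gpus_per_month   # $39M
    cloud_rev_qtr = cloud_rev_month * 3                                   # $117M
    blockchain_rev_qtr = 300 / 4                                          # $75M ($300M annual run rate)
    total_rev_qtr = cloud_rev_qtr + blockchain_rev_qtr                    # $192M

    cloud_ebitda_qtr = cloud_rev_qtr * 0.8                                # ~80% margin -> $93.6M
    blockchain_ebitda_qtr = 100 / 4                                       # $25M ($100M annual EBITDA)
    total_ebitda_qtr = cloud_ebitda_qtr + blockchain_ebitda_qtr           # ~$118.6M
    print(total_rev_qtr, round(total_ebitda_qtr, 1))                      # 192.0  118.6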

And then we've been so far successful with vendor financing, and we're hopeful that can scale significantly. We've also engaged a bulge bracket investment bank that's working on a GPU debt structure specifically for us, something similar to what we saw with CoreWeave. Right now, we feel comfortable about financing those. We've done pretty well so far. We'll see how far that scales. But the debt financing takes care of the vast majority of the financing needs, and whatever the debt financing doesn't cover, the prepayments from customers satisfy the rest.

And just to talk about the contracts we're signing, I've talked about this before: it's going to ramp really quickly, but I call it trying to go into this market in the safest way that we can, which is take-or-pay contracts from our customers that generally pay for the expense of the GPUs over the life of the contract. So we're getting paid back over roughly 24-28 months. The Character contract is 24 months, so maybe we don't get fully paid back, but we get 90%+ of that paid back to us. We're trying to match all of these appropriately, from a CapEx perspective and a risk perspective, to the company.
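A rough check on that payback claim, using the per-cluster figures above; applying the per-1,000-GPU revenue to a 1,024-GPU cluster is an approximation:

    # Take-or-pay payback sketch for one ~1,024-GPU H100 cluster (all amounts in $M).
    cluster_capex = 40.0                   # rough per-cluster CapEx from above
    rev_per_month = 1.5                    # revenue per ~1,000 GPUs per month
    for months in (24, 28):
        contract_revenue = rev_per_month * months
        print(months, "months ->", round(contract_revenue / cluster_capex * 100), "% of CapEx")
    # 24 months -> ~90% of CapEx, 28 months -> ~105%, consistent with "90%+ paid back"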

So that's why we're matching the take-or-pay contracts. And the types of customers we really seek out are customers that have good investors financing them. A product already in the market, with a big user base, is always nice. I'm getting kind of into the weeds here, but when you think about the race people are running to try to win here, why do you want the people that have a lot of customers? Because if you get the best model in the market, whether that's an LLM, whether it's image, no matter what it is, you get the most users.

With the most users, you get the most data, and then you get to train the next version of the model, which is even better. So it's kind of this virtuous cycle if you can catch on the cycle. That's why I prefer to have those types of customers, and that's mostly who we've attracted. So, did I answer? I think I answered everything and more.

George Sutton
Senior Research Analyst, Craig-Hallum Capital Group

Uh, George.

Wes Cummins
Chairman and CEO, Applied Digital Corporation

There you go.

George Sutton
Senior Research Analyst, Craig-Hallum Capital Group

George Sutton, Craig-Hallum. So Jarrett, I thought, did a great job of addressing one of the misconceptions in the market, which is that the hyperscalers will just go off and build their own facilities and ultimately won't need you. Basically, if I heard correctly, they were planning to build two-thirds themselves; they're now looking at only one-third themselves, with two-thirds going to folks like you. So I ask that in the context of the logic of a hyperscaler being your anchor customer in one or more of your near-term facilities, and then I'll hold for a follow-up.

Wes Cummins
Chairman and CEO, Applied Digital Corporation

Yeah. You know, Jarrett's an expert in the industry, so I'd defer to him on the shift to self-building only one-third and outsourcing two-thirds. I'll go back to my own opinion that power is the biggest constraint; near-term power availability is the biggest constraint in the industry. We have kicked off a formal process for the anchor tenant for our North Dakota facility. We kicked that off in mid-September and hope to wrap it up fairly soon. I think those types of customers you're talking about are the most likely to be our anchor tenant at that site. We had a slide; Rich has a slide.

Again, remind me to give this to you later. You're seeing these hyperscale customers, if you look at their data center locations, already moving away from these cloud regions to where they can find power. So we're seeing a lot of interest in our North Dakota site, and hopefully we can wrap that up.

George Sutton
Senior Research Analyst, Craig-Hallum Capital Group

Okay, one other question. Looking out, say, 24-36 months, what will this company look like from the perspective of: you've got a Sai Computing business, which will look largely separate or separable from your data center business, which could become a REIT or become part of a REIT? Can you just give us your longer-term thoughts there?

Wes Cummins
Chairman and CEO, Applied Digital Corporation

Yeah. My thought there is, we made Sai a wholly owned subsidiary for a reason. The contracts for cloud go into Sai. The idea is that Sai Computing is an AI cloud services company, and Applied Digital is a data center company. And if I think through our timing, we build out 300+ MW over the next 24-36 months, and I think at that point Applied Digital has the scale to convert to a REIT on the data center side and spin the Sai Computing side out, which is why we've planned for that. But yeah, you're completely right on them being two separate businesses, one naturally being a customer of the other.

Rob Brown
Senior Research Analyst, Lake Street Capital Markets

Thanks, Wes. Rob Brown at Lake Street. I just want to get your opinion on pricing in the industry and how you see it changing. Are you pricing this demand environment into your contracts, and if so, how?

Wes Cummins
Chairman and CEO, Applied Digital Corporation

Pricing in data centers in general is very strong, and pricing continues to move up. It would be fantastic if we could get Tier III pricing for the style of build we're doing in North Dakota, and I think this market maybe helps us get there. That would, for me, kind of be the dream, but we'll see when we go through this process if that's where we get to. So think about Tier III data center pricing. This is the way data centers are priced, which is important because it's different from what we've done on Bitcoin or the hourly pricing on GPUs.

You price somewhere in the neighborhood of, say, $120-$150 of monthly rent per kilowatt. Per megawatt, that's $120,000-$150,000 of monthly rent. Then you do a pass-through of power and some pass-through of data charges. You don't up-charge the power like we do on Bitcoin mining. That's how you should think about that in a model.

Say our facility at 100 megawatts is at the low end of that, $120. You can just do $120,000 times 100 for the monthly revenue. You'll have additional revenue on the power pass-through that is lower margin or zero margin, but you still end up shaking out to roughly that 50% EBITDA margin.
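Restating that lease math as a small sketch (the $120/kW/month rate and 100 MW size are the examples quoted above; annualizing is the only step added):

```python
# Illustrative Tier III-style lease revenue math, per the example quoted above.

rent_per_kw_month = 120        # $/kW of monthly rent (low end of the $120-$150 range)
capacity_mw = 100              # facility size used in the example

rent_per_mw_month = rent_per_kw_month * 1_000       # $120,000 per MW per month
monthly_rent = rent_per_mw_month * capacity_mw      # $12M per month of base rent
annual_rent = monthly_rent * 12                     # before power pass-through

print(f"Monthly base rent: ${monthly_rent / 1e6:.1f}M")
print(f"Annual base rent:  ${annual_rent / 1e6:.0f}M (power passed through at little or no margin)")
```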

Rob Brown
Senior Research Analyst, Lake Street Capital Markets

With the pivot to liquid cooling from the prior approach, is there any increase in CapEx there we should be aware of, and how would that get passed on to customers?

Wes Cummins
Chairman and CEO, Applied Digital Corporation

Yeah. So there is some increase in CapEx, which we talked about last call, the CapEx going from roughly $4.5 million to $6 million per megawatt. That's two pieces: building vertically, and then primarily adding the liquid-cooled component to it. The upside is that we expect to be able to run much higher densities. And if we can run higher densities and provide liquid cooling, it makes the facility more of a scarce asset in the marketplace, in my opinion. We also run at a lower PUE. Brad, correct me if I'm wrong, liquid should get us to a lower PUE?

Brad Barton
EVP of Real Estate Development, Applied Digital Corporation

Yeah.

Wes Cummins
Chairman and CEO, Applied Digital Corporation

Yeah.

Rob Brown
Senior Research Analyst, Lake Street Capital Markets

Are you seeing any kind of move in the market towards lower-service, lower-cost data centers, simply to secure available capacity?

Wes Cummins
Chairman and CEO, Applied Digital Corporation

I think what we're seeing is that people are much more open to going to regions like North Dakota, for multiple reasons. There's a capacity issue in general that I think is gonna get much worse. And the workloads that we're doing are just wildly different, right? It's not video streaming, it's not Zoom calls, it's not TikTok, it's not mission-critical apps. It's much more compute-driven rather than comms-driven. The last 25 years have been completely comms-driven, in my opinion, from a data center perspective; it's been all these other apps, primarily video, that drive it. This is extremely compute-driven, so you don't need that latency, and people are figuring that part out.

Plus, there's already a very limited supply, so people are gonna be forced into getting supply where it's available. Those are the two items. I will say one other thing about this, though, that you have to be careful about. I've said this many times publicly: we can't convert our facilities into HPC facilities, and I don't think other Bitcoin miners can either. So back to low cost. We're lower cost, but I think there's a limit to where people are willing to put $250,000 servers. I know I would never put them into my Bitcoin facilities, and I think we've built pretty good ones, actually, but I still would never put them in there.

There's definitely a spread. Think about $6 million per megawatt for our build. Brad, Tier III is $10-$12 million, typically. Is that right? $8-$10? So it's still a big step down from there. On backup generators: we're putting some diesel backup generators on sites to run mechanical and a few mission-critical things, but not for the full 300 MW that we build out. Those are fairly expensive. There will be the ability to add them later if we have a customer that absolutely requires it and is willing to pay for it, but we're not doing them initially.

Brad Barton
EVP of Real Estate Development, Applied Digital Corporation

We've seen a big shift in customer expectations. They're not really looking for the generator backup.

Wes Cummins
Chairman and CEO, Applied Digital Corporation

So-

Mike Grondahl
Head of Equities/Director of Research and Senior Research Analyst, Northland Securities

It's Mike with Northland. Character.AI is obviously gonna be a really big customer, going from 1,000 GPUs right now to 16,000. Can you just talk about the pacing there, and maybe when a second customer, customer number two, is gonna be getting some of these other GPUs?

Wes Cummins
Chairman and CEO, Applied Digital Corporation

Yeah. The second customer gets GPUs in November. Let me split this up when we talk about large customers and Character, because this is important for what we're doing in North Dakota, too. So we have the capacity that we're ramping up: Jamestown, where we'll run 5,000 of these, and then we have Denver, Salt Lake City, Las Vegas, Minnesota, right? We've got these third-party colos that we've spooled up, so they'll be spread out. And then the way we view the cloud business, when you're looking out into 2024 as we get this capacity online in Ellendale, in North Dakota, is that we have demand from several companies for very large training clusters.

The capacity, the electricity capacity, the data center capacity, doesn't really exist for these. So when you think about someone that wants to do 22,000 H100 GPUs in a single cluster, they're gonna need close to 40 MW of power around that 30-meter radius that we were talking about earlier. Those are the types of things that are perfect for Ellendale, and that will be capacity specifically for that. And when we put those large training clusters in Ellendale, the smaller training clusters that are more in cloud regions around the country will be really well positioned for inferencing, if that makes sense.

There are many different versions of what inferencing will look like, but I think those will start to move towards the inferencing part of the market because they're based in those cloud regions. So on that 16,000 comment, I want to bucket that into very large training models, specifically at the Ellendale location. But our second customer will get capacity in November.
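As a rough sanity check on the "22,000 GPUs needs close to 40 MW" sizing mentioned above: the per-server power and PUE figures below are illustrative assumptions, not numbers from the discussion.

```python
# Rough power sizing for a 22,000-GPU H100 cluster (assumed per-server draw and PUE).

gpus = 22_000
gpus_per_server = 8
server_power_kw = 10.0      # assumed draw for an 8-GPU H100 server, incl. CPUs and networking
pue = 1.3                   # assumed facility overhead

servers = gpus / gpus_per_server
it_load_mw = servers * server_power_kw / 1_000
facility_mw = it_load_mw * pue
print(f"IT load: ~{it_load_mw:.0f} MW, facility load: ~{facility_mw:.0f} MW")
```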

Mike Grondahl
Head of Equities/Director of Research and Senior Research Analyst, Northland Securities

Roughly about how many thousand?

Wes Cummins
Chairman and CEO, Applied Digital Corporation

So we'll see how the schedule goes. We got delivery in September, we'll get delivery in October, and we expect significant deliveries in November and December. If the deliveries go according to the schedule we've been given, we should get roughly 20,000 GPUs by the end of December of this calendar year. So that will start to fill up a lot of those other contracts that we've signed.

Mike Grondahl
Head of Equities/Director of Research and Senior Research Analyst, Northland Securities

Thank you.

Wes Cummins
Chairman and CEO, Applied Digital Corporation

Yep. Jonathan, go ahead, Jonathan.

Jonathan Lee
Technology and Digital Infrastructure Equity Analyst, Guggenheim Securities LLC

When we think about NVIDIA and how they allocate GPUs, are they looking down as far as... you say your undervalued asset is the power contracts. Are they looking that far and saying, "Oh, there are actual power contracts here; we know these GPUs can be deployed"?

Wes Cummins
Chairman and CEO, Applied Digital Corporation

Yeah. So we had to do that on the first one too, right? We have to show them where we're deploying, and maybe what the specs are. I wasn't involved specifically in that conversation. But we have to show them where we're deploying these, because it's important that there not be a secondary market created, where people are just buying and flipping, because that still exists. And NVIDIA has their own strategy around this.

I don't know what that is, but obviously they've been through this before with a lot of the gaming GPUs in the crypto cycle, where they were all getting sucked into the Ethereum mining market and they couldn't get them into actual PCs for gaming. So I think they're trying to control that really tightly, and you do have to show the capacity to plug these in if you want delivery. Or at least we do; maybe someone else gets treated differently.

Speaker 20

You were early getting the first 300 MW of power lined up, but I would imagine there's lots more competition now. What does it look like in terms of your ability to lock down the 1.1 or 1.2 GW, and how does pricing look, et cetera?

Wes Cummins
Chairman and CEO, Applied Digital Corporation

Yeah. So pricing is a little bit higher than what we see at, for example, our North Dakota site, but still really attractive for this space. We've done this many times, so we're in the process of power studies. All the utilities, generally, when you start this, have to go through the process of power studies and make sure they can deliver that, and depending on which utility it is and which ecosystem it's sitting in, ERCOT or MISO or whatever it might be, they have to go through their studies. So we're in that process, but some of those look really promising. I think one of the sites is around 0.5 gigawatts by itself.

So we're definitely gonna keep working on the pipeline. I will say this, though: the 300 MW that we already have to work on is gonna keep our hands full for the next 18 or 24 months.

Jonathan Lee
Technology and Digital Infrastructure Equity Analyst, Guggenheim Securities LLC

On your financing, two things. First, what's the theoretical cap on vendor financing? Let's say you get to that 20,000 by the end of the year, can you fully finance that? And then on the potential collateralization, do you actually have to have physical delivery of those chips before you can collateralize the asset?

Wes Cummins
Chairman and CEO, Applied Digital Corporation

David, I'll let you answer that.

David Rench
CFO, Applied Digital Corporation

Yeah. We have high confidence we're able to finance the clusters we have on order, again, through finance companies, OEMs, and then the bulge bracket bank that we're working with to accomplish that. Typically, yes, when they're delivered is when we take ownership and can securitize them.

Jonathan Lee
Technology and Digital Infrastructure Equity Analyst, Guggenheim Securities LLC

Maybe one more. On Ellendale, at what point with energy capacity do you have to put a new transformer in? Like, what's the theoretical cap?

David Rench
CFO, Applied Digital Corporation

We've actually already purchased that transformer, and prepared for that expansion, because we do have to expand the substation there.

Wes Cummins
Chairman and CEO, Applied Digital Corporation

Yeah, even the current, the next build, requires it. We have an ESA for 225 MW, but it does require a transformer, and we luckily found one available, because the lead time is almost a year and a half, something like that, for those kinds of transformers. So that was a big win for us.

Nick Giles
Senior Research Analyst, B. Riley Securities

Hey, Wes, Nick Giles from B. Riley. How would you outline the typical checklist for GPU deliveries, whether that's documentation, financing? Can you kind of walk through that timeline?

Wes Cummins
Chairman and CEO, Applied Digital Corporation

You mean just the things that we typically have to do to get delivery?

Nick Giles
Senior Research Analyst, B. Riley Securities

Yeah, from delivery, and then just kind of from, you know, initial order to plug-in.

Wes Cummins
Chairman and CEO, Applied Digital Corporation

Yeah. It depends, but our experience generally has been this: we have a lot of orders placed with Supermicro and HP and Dell. So when we have our customer, generally we'll tell NVIDIA which customer this is being allocated to, and then that customer will confirm it. We'll say which OEM we're using, and then we'll generally get a scheduled delivery date. And then we have to show them where these are going to be delivered and the data center capacity that we have to turn them on. Those are typically the boxes that we have to check to get delivery.

David Rench
CFO, Applied Digital Corporation

There is a period of racking and cabling, obviously, to get these large clusters stood up once we receive them at the actual data center.

Wes Cummins
Chairman and CEO, Applied Digital Corporation

I would say, in general, you know, the GPU delivery seems to have gotten better. InfiniBand was much more difficult than the GPUs, and that seems to be getting better as well, on the InfiniBand side. So, the supply chain overall there seems to be getting better, at least to us, it seems like that.

George Sutton
Senior Research Analyst, Craig-Hallum Capital Group

As a lucky resident of Minnesota, I know it's about to get cold. I happen to know North Dakota gets cold about the same time, if not earlier. I wondered if you could be real specific as to when you're going to need to have the footings in, and assuming you do get the footings in in time, as you're talking to customers about when you're going to potentially have the ability to light up this new facility, what kind of time frame are you telling them?

Wes Cummins
Chairman and CEO, Applied Digital Corporation

Brad, do you want to help on the footings? I know that it needs to be fairly soon, but the goal here is kind of first power in the building in, like, the late April, early May time frame. But, Brad, maybe any specifics you want to give on just foundation pours and weather?

Brad Barton
EVP of Real Estate Development, Applied Digital Corporation

Sure. Winter conditions are definitely something we take into account. We've been fortunate enough to build through two North Dakota winters already. It's definitely been a challenge, and we're making the accommodations to start within the next few weeks. We just finished our geotech, got our last boring samples last week, and our finalized foundation design is due any time this week. So the GC is mobilizing shortly.

George Sutton
Senior Research Analyst, Craig-Hallum Capital Group

Actually, one other quick question, if I could. We're increasingly seeing financings happening that include different chipsets as part of the financing, where I won't use NVIDIA; if I'm Anthropic, I'm going to use Amazon, et cetera. So can you talk about your willingness and ability to migrate to those other chipsets as that happens?

Wes Cummins
Chairman and CEO, Applied Digital Corporation

Yeah. So we can do that if our customer has that requirement. You know, we could do Intel, we could do AMD. Obviously, I don't think we're getting our hands on Trainium or Inferentia, which are the Amazon chips. I don't think they're selling those externally. But any of those other vendors, in theory, we could do. The vast majority of the demand we still see is around NVIDIA, whether that's, you know, still H100. There's a lot of interest in the Grace Hopper Superchip, but we're not seeing a lot of supply there, and I don't know when that's gonna change.

And then as Jason mentioned in his presentation, you know, we're working with a customer on a smaller deployment that could turn into a larger deployment of the L40Ss, which are really around inference. It's kind of like an A100 plus around inference. But the answer is, George, we could. We could do that.

Speaker 21

You talked a little bit about the anchor tenant that you're working on. What are the gating items or steps that you need to complete, and any sense of timing on when that contract gets nailed down?

Wes Cummins
Chairman and CEO, Applied Digital Corporation

Go ahead, David.

David Rench
CFO, Applied Digital Corporation

For us, it was really getting the design finalized so that they could understand the value that we're adding there, and that wrapped up a month or so ago. So we kicked off the formal process in mid-September, and from here it's just working through that formal process.

Wes Cummins
Chairman and CEO, Applied Digital Corporation

Yeah.

Speaker 22

Thanks, guys, for the question. I think with the Bitcoin miners, you had them on Marathon's balance sheet; it wasn't actually you owning the miners yourselves, right? So what changed here with how you're approaching HPC and actually purchasing the GPUs? And also, of the 34,000, what's the split between H100s, A100s, and L40s?

David Rench
CFO, Applied Digital Corporation

The 34,000 is all H100s today. And, you know, for the HPC buildings, the anchor tenant, we would not own any of the servers or hardware. They would bring it in, and it would just be colocation.

Wes Cummins
Chairman and CEO, Applied Digital Corporation

You're right, we are owning the GPUs on the cloud side. I think we just saw the opportunity. Jason even said it, and I said it earlier: we did a lot of the work to put ourselves in this position, but then there was this luck piece, because we were building for what we thought was a much smaller market. When we saw this market develop, we were one of the first people there that had the physical infrastructure to do it, we'd already put software tools in place to do it, we were running small customers on that, and we could just lean into that market.

I think the big difference for me, as one of the large owners of the company, is the durability of this market. I'm not saying that Bitcoin is not a durable market, but you have the volatility in the Bitcoin market, the inability to have pricing control; that's kind of a unique feature of the Bitcoin market, I would say. So in this market, looking at who's involved, the level of involvement, the potential, and being in very early, I think it's much more attractive for us to own GPUs and run that cloud service, given our position, than it was for us in the Bitcoin market.

Speaker 22

You talk about your power cost and capacity advantages. How does that relate to the actual end price you'd be able to offer the customer through Sai? I believe Lambda Labs is at, like, $2 a GPU-hour or something like that, maybe a little lower at enterprise scale. How do you stay cost competitive with that, and do the power advantages help you undercut some of those guys?

Wes Cummins
Chairman and CEO, Applied Digital Corporation

Sure. So I think that over time, one of the biggest advantages for us is purpose-built. That word was said a lot today, and intentionally: purpose-built, low-cost digital infrastructure for these types of workloads. It's not repurposing Swiss Army knife data centers that were built for a lot of web servers. It's purpose-built, high power density, so whether it's training or inference or whatever it is, these are purpose-built. And then the power cost, I think, is important. That was what we really focused on for Bitcoin mining, because it was super important there. High power consumption means that power ends up being one of your biggest operating costs, and that's not changing here, right? We're talking about the same, if not higher, power consumption.

So for us to be sitting in North Dakota at $0.03-$0.035/kWh, versus some of the colo that we see, which can be $0.15 if you're in California, or the $0.10 colo costs we see a lot: there's no margin made for the colo provider on that pass-through, but the expense gets pretty significant when you're consuming the amount of power these applications consume. So one of the big advantages for us over time is building this purpose-built infrastructure at low cost, to keep our cost structure the lowest or one of the lowest in the industry, to be able to compete in the space.
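To put those power rates in context, here is a rough annual cost comparison per megawatt of IT load at the rates just mentioned (this assumes continuous draw and ignores PUE, so it is illustrative only):

```python
# Rough annual power-cost comparison per MW of continuous IT load at the quoted rates.

HOURS_PER_YEAR = 8_760

def annual_power_cost_per_mw(price_per_kwh: float) -> float:
    """Annual cost of running 1 MW (1,000 kW) continuously at a given $/kWh rate."""
    return 1_000 * HOURS_PER_YEAR * price_per_kwh

for label, rate in [("North Dakota (~$0.03/kWh)", 0.03),
                    ("Typical colo (~$0.10/kWh)", 0.10),
                    ("California (~$0.15/kWh)", 0.15)]:
    print(f"{label}: ~${annual_power_cost_per_mw(rate) / 1e6:.2f}M per MW-year")
```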

Right now, the advantage is gonna be that we have the capacity, but the good news is, when you go further into the future, it's the right type of capacity with the right kind of cost.

Speaker 22

So will you release any information on what that pricing might look like, or is that in the future?

Wes Cummins
Chairman and CEO, Applied Digital Corporation

I think I just gave... We've given all of that pricing. So if we're running our own cloud and we're paying, call it, $0.11-$0.12/kWh-

Speaker 22

I mean, like, how much would it cost per GPU hour for initial?

Wes Cummins
Chairman and CEO, Applied Digital Corporation

Yeah, we haven't given that, that specific number. Yep.

Speaker 23

In the Bitcoin business, originally, they were mined by GPUs, and then that was taken over by much cheaper, more efficient ASICs. To what extent is that feasible or relevant in the AI business, that ASICs could take over from the GPUs?

Wes Cummins
Chairman and CEO, Applied Digital Corporation

Yeah, I think there are companies working on that now, but we haven't seen anything. We watch these, and our customers haven't seen anything, so I think if that ever happens, it's a long way down the road. For us, the risk would be this: on Bitcoin, the ASICs became more efficient, but you still have really high power consumption, so the data center piece would probably stay very similar. It would be, what is the value of the GPUs we own if something much more efficient comes along?

That's definitely always going to be a risk, but I don't see anything that close in the market right now.

Erin Kraxberger
VP of Government Relations & Public Affairs, Applied Digital Corporation

Any more questions?

Nick Giles
Senior Research Analyst, B. Riley Securities

Hey, Wes, you've spoken about contracting 70% of your capacity. Can you just talk a little bit more about the other 30%? Would this be short-term, smaller customers, or would you prefer a larger cluster?

Wes Cummins
Chairman and CEO, Applied Digital Corporation

Oh, you mean if we're running our own cloud service in that?

Nick Giles
Senior Research Analyst, B. Riley Securities

Yeah.

Wes Cummins
Chairman and CEO, Applied Digital Corporation

Yeah. So the way I view our cloud business over time: we have these large customers, we've announced some of them, and we talked about new ones on the quarter and what the annual contract value is. Those are great. Those are take-or-pay contracts that pay us back for the GPUs. We also have our on-demand service that's running. It's small, but it's started to ramp up as well; that was north of 100,000 in July and moving up. But the way I see it over time is that the on-demand portion of the business needs to get significantly larger: a more diverse customer base, higher pricing, higher margin for us.

And the way you do that is, we get a lot of this reserved capacity, and then there's gonna be some slack. So when we have 26,000 GPUs online, we'll have some slack, and with our customers, we can give them a partial credit back on their GPU for the hour and then resell that as on-demand. So if you have 15% of your GPU pool available, you're gonna have a pretty big pool of GPUs to run an on-demand service. That's the piece that we're structuring, and it's already ramping up for us. But three years from now, I'd hope that's something like 50% of the business, versus just running these reserve contracts.

Nick Giles
Senior Research Analyst, B. Riley Securities

It seems like some on-demand capacity can be almost double the pricing of reserve. Would you say that's still the case today?

Wes Cummins
Chairman and CEO, Applied Digital Corporation

Yeah, it depends. I think the fairer price is probably in the 35%-45% premium to reserve, with a caveat: it depends on the length of the reserve contract. It's apples and oranges on the reserve contracts, because five-year contracts are priced much lower than a six-month reserve contract; you see them significantly below. But versus our average contract, I would think something like a 35%-45% premium for on-demand.
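A simple illustration of that spread: only the 35%-45% premium range is from the discussion, and the $2.00/GPU-hour reserve rate below is a hypothetical placeholder (roughly the Lambda Labs figure raised earlier in the Q&A).

```python
# Reserve vs. on-demand pricing spread, per the premium range quoted above.

reserve_rate = 2.00                      # $/GPU-hour, hypothetical reserve price
for premium in (0.35, 0.45):
    on_demand = reserve_rate * (1 + premium)
    print(f"On-demand at {premium:.0%} premium: ${on_demand:.2f}/GPU-hour")
```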

Nick Giles
Senior Research Analyst, B. Riley Securities

Excellent.

Speaker 19

I guess, how do you see the ROI on power, like revenue per megawatt, for the crypto space versus the HPC space?

Wes Cummins
Chairman and CEO, Applied Digital Corporation

Yeah. David had that slide; I don't know if you were here earlier for it. We actually did the breakdown on a per-megawatt basis, and it goes like this: a megawatt on Bitcoin was $625,000 of revenue, HPC colo is $2 million per megawatt, and the GPU piece, the AI cloud, is about $12 million a megawatt.
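Those per-megawatt figures, restated for reference (numbers as quoted; only the multiples are computed here):

```python
# Revenue per megawatt by segment, as quoted in the answer above.

revenue_per_mw = {
    "Bitcoin hosting": 625_000,
    "HPC colocation": 2_000_000,
    "AI cloud (GPUs)": 12_000_000,
}
base = revenue_per_mw["Bitcoin hosting"]
for segment, rev in revenue_per_mw.items():
    print(f"{segment}: ${rev / 1e6:.2f}M per MW ({rev / base:.0f}x Bitcoin hosting)")
```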

Speaker 19

Great.

David Rench
CFO, Applied Digital Corporation

Those slides are available on our website now.

Speaker 19

Good to hear. And then, with Marathon having the 309 MW of capacity contracted, how does that leave room for the HPC space? What's the breakdown of the rest of that?

Wes Cummins
Chairman and CEO, Applied Digital Corporation

Yeah. So we have a total for blockchain data centers of just under 500 MW, about 486 MW. Besides that, we have a 9-MW HPC facility that'll be finished by the end of this year in Jamestown; that was our build number one. We have another ESA, an electricity service agreement, essentially a PPA, in North Dakota for 225 MW. That's new build; that's the one we're talking about moving dirt on and getting going here. And we have 100 MW just north of Salt Lake City for power as well. So separate it out as roughly 500 MW for blockchain, and then a little over 300 MW for HPC data center builds in the pipeline.

Contracted in the pipeline.

David Rench
CFO, Applied Digital Corporation

We have a pipeline above that.

Speaker 19

Okay.

Speaker 21

Well, what are your aspirations or limitations?

Wes Cummins
Chairman and CEO, Applied Digital Corporation

I think right now, the limitation has just been GPU delivery. We're able to go out and get the data center capacity for that, and the GPU delivery bottleneck is starting to be solved; we're starting to see those deliveries come through. So I think it's just been delivery at this point. As we see that get better through November, December, January, we're gonna see a really steep ramp of that revenue and the income stream from it. Demand hasn't been the issue for Sai; it's been more that.

Now, as we get through the GPU bottleneck, the next bottleneck, as I've been saying here, is data center capacity. For us, we've locked in what we've locked in now, and as we look out to the first half of next year and throughout calendar 2024, I think we need our own capacity to come online to feed that growth for Sai Computing. Yeah, so it depends. We've talked about trying to keep 30%. I think we have gen AI customers for large training models that probably want closer to 60% of building number one right now.

There's a trade-off, because I can't finance the construction with those customers the way I can with an anchor tenant customer; there's gonna be some constraint there. I think we need to get to that 70% number to get the bank debt financing, the construction loan that then flips into the ABS at the end. So there are definitely some puts and takes in that. You should think of it this way, and I want to be clear because it gets a little bit confusing: on the 300 megawatts, you get an anchor tenant, and you get construction financing at 70%-80% loan-to-cost.

Then there's a piece that in the industry is generally called the equity check. In public markets, you'd think of the equity check as equity, but at the site level, think of it more like mezz debt: high-teens rate-of-return capital, first money out, that then leaves a small equity piece, say 3%-5%, at the site level. That's the way you should be thinking through it. It's not the full 20%-30% that we talked about as the equity portion of it.
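A sketch of that site-level capital stack for a single 100 MW building: the ~$6 million per megawatt build cost, the 70%-80% loan-to-cost range, and the 3%-5% residual equity are quoted in the discussion, while the midpoints used below are illustrative assumptions.

```python
# Site-level capital stack sketch for one ~100 MW building (illustrative midpoints).

build_cost_per_mw = 6_000_000          # ~$6M per MW, as quoted earlier in the Q&A
capacity_mw = 100
total_cost = build_cost_per_mw * capacity_mw       # ~$600M per building

loan_to_cost = 0.75                    # midpoint of the 70%-80% construction loan range
construction_loan = total_cost * loan_to_cost

residual_equity_pct = 0.04             # midpoint of the 3%-5% site-level equity range
residual_equity = total_cost * residual_equity_pct

# The remainder is the "equity check" piece, described above as behaving more like
# mezzanine debt (high-teens return target, first money out).
mezz_like_piece = total_cost - construction_loan - residual_equity

print(f"Total build cost:         ${total_cost / 1e6:.0f}M")
print(f"Construction loan:        ${construction_loan / 1e6:.0f}M")
print(f"Mezz-like 'equity check': ${mezz_like_piece / 1e6:.0f}M")
print(f"Residual site equity:     ${residual_equity / 1e6:.0f}M")
```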

George Sutton
Senior Research Analyst, Craig-Hallum Capital Group

Is it possible, to get to the 70% in your first facility, the North Dakota facility, that you might have a couple of anchors, not just one-

Wes Cummins
Chairman and CEO, Applied Digital Corporation

Yeah

George Sutton
Senior Research Analyst, Craig-Hallum Capital Group

-specific anchor? And as you are going through this process, where is Salt Lake City coming into the mix from a timing and opportunity standpoint in the negotiations? Are they completely separate negotiations, or are they somewhat integrated with the same types of customers?

Wes Cummins
Chairman and CEO, Applied Digital Corporation

It'll be similar customers. Right now, we're focused on the North Dakota facility. In my mind, the way these get built is 100 in North Dakota, 100 in Salt Lake City, 100 in North Dakota, for that 300 megawatts. The marketing process hasn't started yet for Salt Lake City; we're focused on North Dakota right now.

David Rench
CFO, Applied Digital Corporation

But similar customers that would be-

Wes Cummins
Chairman and CEO, Applied Digital Corporation

Yeah.

David Rench
CFO, Applied Digital Corporation

-marketed to.

Jonathan Lee
Technology and Digital Infrastructure Equity Analyst, Guggenheim Securities LLC

You and Jason mentioned the Grace Hopper.

Wes Cummins
Chairman and CEO, Applied Digital Corporation

Yep.

Jonathan Lee
Technology and Digital Infrastructure Equity Analyst, Guggenheim Securities LLC

In terms of NVIDIA's build-out of GPUs, how does that look compared to the H100 and the current state of the GPU ecosystem? You mentioned you see demand but not supply. Is NVIDIA purposefully limiting supply of Grace Hoppers to not cannibalize sales of H100s? So theoretically, if you had the same supply of Grace Hoppers and H100s, what would be your perspective on-

Wes Cummins
Chairman and CEO, Applied Digital Corporation

Yeah.

Jonathan Lee
Technology and Digital Infrastructure Equity Analyst, Guggenheim Securities LLC

-purchase?

Wes Cummins
Chairman and CEO, Applied Digital Corporation

Where's Mike? Mike, so this is my understanding. The GH200 Superchip still uses the same GPU; it's more about on-board memory and processing, right? You have the Arm processor directly on the board with the GPU and memory. I don't wanna speak for NVIDIA, but I think you're using the same H100 GPU chip, just a little bit different architecture on the board. Is that right, Mike?

Mike Maniscalco
CTO, Applied Digital Corporation

Yes, and I think it's still off the market entirely.

Wes Cummins
Chairman and CEO, Applied Digital Corporation

Yeah.

Mike Maniscalco
CTO, Applied Digital Corporation

The GH200 architecture isn't available on the market yet. So I think a lot of the companies understand the performance capabilities of the H100 right now, and they're very comfortable with that. They're waiting to get their hands on the GH200 to benchmark it and see how it's actually gonna play out. I think that's one of the biggest limitations right now.

Wes Cummins
Chairman and CEO, Applied Digital Corporation

Yep. All right. Erin, you want to wrap up?

Thank you all very much.

Yeah, thanks, everyone, for coming.

Erin Kraxberger
VP of Government Relations & Public Affairs, Applied Digital Corporation

Thank you, everyone. Incredible morning. Equally incredible, I've walked up those steps 15 times and haven't managed to fall on my face. We want to express our appreciation for your attendance, questions, and enthusiasm throughout the day. We certainly hope you're seeing the promising outlook that's here at Applied. With your continued support, we will continue to redefine what we think is achievable in the world of digital infrastructure. Thank you for joining us. We do have box lunches outside, so please feel free to grab one if you're hungry. We also have a camera crew set up. If you'd be willing to share any thoughts or feedback, we'd love to collect that on camera. Thank you again. Have a good day.
