IREN Limited (IREN)

Earnings Call: Q3 2026

May 7, 2026

Operator

Good day, and thank you for standing by. Welcome to IREN's Q3 FY 2026 results call. At this time, all participants are in listen-only mode. After the speakers' presentation, there will be a question and answer session. To ask a question during the session, you will need to press star 11 on your telephone keypad. You will then hear an automated message advising your hand is raised. To withdraw your question, please press star 11 again. Please be advised that today's conference is being recorded. I would now like to hand the conference over to our first speaker today, Mike Power, Vice President of Investor Relations. Please go ahead.

Mike Power
VP of Investor Relations, IREN

Thank you, operator. Good afternoon, and welcome to IREN's Q3 FY 2026 results presentation, and thank you for your patience as we get assembled. I'm Mike Power, VP of Investor Relations, and with me on the call today are Daniel Roberts, co-founder and co-CEO; Anthony Lewis, CFO; and Kent Draper, Chief Commercial Officer. Before we begin, please note that this call is being webcast live with an accompanying presentation. For those dialed in by phone, you can elect to ask a question through the moderator after our prepared remarks. I would like to remind everyone that certain statements made during this call may constitute forward-looking statements. Those statements are based on current expectations and assumptions and are subject to risks and uncertainties that could cause actual results to differ materially. Please refer to slide 2 of the accompanying presentation and SEC filings for more information.

During today's call, we will also refer to certain non-GAAP financial measures. As a reminder, a reconciliation to the most directly comparable GAAP measures is included at the end of the presentation. With that, I will turn the call over to Daniel Roberts.

Daniel Roberts
Co-Founder and Co-CEO, IREN

Thanks, Mike, and thank you everyone for joining us today. Eight years ago, when Will and I founded this business, we spent a lot of time thinking about what the digital future actually meant for the physical world. We talked about films like The Matrix and Ready Player One, not as science fiction, but as a signal. Worlds where digital adoption was total, instantaneous, and infinite. The insight we kept coming back to was this: digital adoption curves can go from zero to one overnight, but the real world doesn't scale that way. Power, infrastructure, land, data centers, these take years to permit, finance, and build. The bigger the demand, the harder delivery becomes. That gap between exponential digital growth and the physical world's ability to service it, that structural disconnect, is exactly what we set out to solve.

That scarcity is now defining where AI infrastructure gets built and who can build it. Eight years later, that thesis is playing out exactly, and this quarter we demonstrated what disciplined execution against it looks like at a global scale. In AI infrastructure, secured power is only valuable if it can be converted into customer-ready compute. That conversion is hard. It requires site control, grid connection work, permitting, design, procurement, construction, GPU installation, networking, commissioning, financing, and customer delivery, all coming together on tight timelines. IREN's strength is bringing those pieces together. We have experienced site teams, standardized designs, and repeatable construction processes that allow us to build across multiple sites in parallel. As we scale, each phase builds on the prior phase. The template becomes more repeatable, the procurement and construction process becomes more efficient, and the site teams carry that experience forward.

That is where IREN has built its moat and why real assets and real capabilities are harder to replicate than they might appear. That execution capability is showing up in the numbers. More capacity, more revenue, stronger funding certainty, and in the partnerships we are announcing today. This was a significant quarter and a significant week. Let me run through the highlights. On capacity, we increased secured power to 5 gigawatts, added new sites in Europe and APAC, energized Sweetwater One on schedule, and have Horizon One GPU commissioning now underway for Microsoft. On customers, all of our operational capacity is fully contracted. We are not chasing demand. We are racing to build supply fast enough to meet it. In this market, the moment compute comes online, it goes to work.

That is the nature of the structural imbalance between AI infrastructure supply and demand, and it is why time to compute is the most important metric we track. We increased ARR under contract to $3.1 billion, remain on track to hit $3.7 billion exiting calendar 2026, and this week signed a $3.4 billion five-year AI Cloud contract with NVIDIA, the first step in a broader strategic partnership I will come to in a moment. On capital, we had $2.6 billion of cash at April 30, and we continue to progress GPU, data center, and corporate-level financing initiatives to support the next phase of build-out. The headline today is the NVIDIA partnership, and it deserves a little more than a bullet point. Let me explain what this partnership actually means.

We are working with NVIDIA to support deployment of up to 5 gigawatts of NVIDIA DSX-aligned AI infrastructure across our global data center platform, alongside DGX environments and the DSX AI factory reference architecture. The $2.1 billion NVIDIA investment is structured to reflect that. Their rights to invest only vest as NVIDIA GPU infrastructure is deployed across our campuses, and only fully vest upon deployment of 600,000 GPUs. NVIDIA's capital is directly tied to execution. That's not a passive financial investment. NVIDIA is a partner who wins as we deliver. The $3.4 billion AI Cloud contract announced today supporting NVIDIA's own internal workloads is the first step in that partnership. Eight years ago, Will and I set out to build the infrastructure the digital world would need. Today, the world's leading AI infrastructure company has chosen IREN as the partner to help build it.

This next slide shows exactly how we build against this. Here is our plan. In 2026, we are targeting 480 MW of AI Cloud capacity, 150,000 GPUs, and $3.7 billion of ARR by year-end. That is the near-term plan and the clearest bridge from capacity to revenue. In 2027, we are scaling to 1,210 MW, with an additional 730 MW currently under construction across British Columbia and Texas, including Childress and the initial phase at Sweetwater One. The construction flywheel we are running in 2026 carries directly into this next phase. Beyond 2027, we are building against a 5 GW global power portfolio. North America, our new European platform in Spain, and an APAC pipeline anchored by large-scale Australian opportunities.

The sequence of delivery matters because it dictates time to compute, and time to compute is what drives revenue. Each phase supports the next. That's how the platform compounds. One more thing before we move on. This week, we welcome Mirantis into the IREN family. 650 engineers, operators, and customer support professionals who have spent more than a decade running cloud infrastructure for over 1,500 enterprise customers globally. To Alex and the whole Mirantis team, welcome. I'll come back to what this means for our delivery capability later on. Let me start with 2026, where construction and customer demand are coming together most visibly. The 2026 expansion is focused on delivering 480 MW of AI Cloud capacity across Childress, Prince George, and Mackenzie. This is where the roadmap translates into near-term deployments, customer handoffs, and ARR conversion.

We'll start with the largest and most complex 2026 work stream, the 300 megawatt Horizon 1-4 liquid-cooled deployment at Childress, where NVIDIA GB300 NVL72 installations are now underway. Horizon 1 is scheduled for Microsoft handoff in Q3, and Horizons 2-4 remain on track for delivery by the end of this year. This is a major execution milestone. It demonstrates our ability to design, build, fit out, and commission large-scale next-generation liquid-cooled infrastructure for a hyperscale customer on an accelerated schedule. We have around 3,000 workers on-site right now. That level of activity reflects both the urgency of AI infrastructure demand and also the depth of our execution capability on the ground. Importantly, the model is repeatable. Horizon 1 establishes the build template. Each subsequent phase benefits from the same design, supply chain, construction sequencing, and site team.

That is how we drive faster deployment with each phase. Alongside the liquid-cooled build, we are also converting existing air-cooled capacity into AI Cloud deployments across British Columbia and Childress, where we are progressing 180 MW of air-cooled AI Cloud capacity by leveraging existing infrastructure. At Prince George, all air-cooled GPUs have now been delivered and are either operating or undergoing commissioning across the 50 MW site. At Mackenzie, 80 MW of data center capacity has been prepared for GPU installations commencing in the second half of 2026. Finally, at Childress, data center retrofits are underway across an initial 50 MW ahead of GPU deliveries in the second half of this year. This is a capital efficient part of the roadmap. We are taking existing sites and converting them toward higher value AI Cloud workloads.

It works because we already have the operational teams, infrastructure, and site control in place. Air-cooled capacity can come online faster than liquid cooled. In a market where time to compute is everything, that speed is a commercial advantage, and we are using it. We're already seeing this dynamic play out commercially, with capacity continuing to be contracted ahead of commissioning as customers prioritize speed to market. We now have $3.1 billion of ARR under contract, including approximately $700 million of ARR associated with a $3.4 billion 5-year contract for Blackwell GPUs to be deployed across 60 MW of air-cooled capacity at Childress for NVIDIA. Against the full 2026 expansion, we are targeting $3.7 billion of ARR by year end across 150,000 GPUs.

The remaining uncontracted capacity represents approximately 50,000 air-cooled GPUs scheduled for delivery in phases through the second half of this year. Demand for that capacity is robust. Our focus is on using our time to compute advantage to secure the right customer mix. With the 2026 plan on track, let me turn to what comes next. The 2027 expansion scales the platform to 1,210 MW. The 2027 plan is about demonstrating that what we are building in 2026 is not a one-off. It is a repeatable, scalable model that should accelerate over time. Here's what that looks like in practice. In British Columbia, Canal Flats is another example of converting existing infrastructure into AI Cloud capacity. We plan to retrofit all 30 MW of existing air-cooled capacity to support AI workloads.

Capital efficient, fast to execute, and consistent with the same model we are running at Prince George and Mackenzie. In parallel, Childress continues to be the largest single contributor to the 2027 setup, with both new liquid-cooled capacity and additional air-cooled retrofits adding a total of 400 megawatts of gross capacity. At Childress, the 2027 plan includes 100 megawatts of additional liquid-cooled IT load for Horizons five and six, as well as retrofitting an additional 250 megawatts of existing air-cooled capacity. Of that 250 megawatts, approximately 60 megawatts will be deployed to support the NVIDIA AI Cloud contract. The combination of new liquid-cooled data centers and air-cooled retrofits gives us real flexibility. We can support next generation high-density deployments while continuing to use existing infrastructure where it is the right technical and economic fit.

That flexibility is part of what makes Childress such a productive campus. In parallel, Sweetwater becomes the next major Texas campus in the 2027 plan. At Sweetwater One, the high voltage substation has been energized on schedule, and construction is now underway for the initial 200 megawatts IT load phase of liquid-cooled data centers. Energizing the substation is an important milestone. It moves Sweetwater from development into execution and establishes the electrical foundation for the broader site build-out. Sweetwater One is being designed for next generation chip architectures, including the NVIDIA Vera Rubin. Like Childress, we are deliberately sequencing the build so that the first phase creates the backbone for faster subsequent phases. The first 200 megawatts is not just the first 200 megawatts. It is the foundation for a much larger site.

The commercial pipeline for our 2027 capacity is anchored on the same principle that is driving everything we are building. Our vertical integration is a genuine advantage for customers because we control more of the critical path than anyone else in this market. Power, land, data center construction, the pieces that cause delays for others are the pieces we own and control. Customers want certainty the capacity will be available when promised. The phased 2027 build-out plan gives us a concrete basis for those conversations, and we are having them. We are in the process of negotiating large-scale AI Cloud deployments across our 2027 capacity today. Demand is not the constraint, and it is highly unlikely to become one. The priority is delivering capacity on schedule and converting our time to compute advantage into durable long-term customer relationships.

We do expect the customer mix to evolve over time across hyperscalers, AI natives, enterprises, and on-demand use cases, but we do not need to force that outcome. The platform will attract the right customers as it continues to scale. Beyond 2027, the same execution model extends into a much larger 5 gigawatt global platform. We now have 5 gigawatts of secured power. To put that in context, that is not a pipeline number or an aspiration. That is secured power, and it represents one of the largest portfolios assembled for AI infrastructure anywhere in the world. The question now is how we build against it. The answer is a phased global platform across North America, Europe, and APAC, with additional development opportunities beyond that. Let me walk you through each region.

We'll start with North America, which remains the largest component of the long-term platform. In North America, the next major phase is driven by Sweetwater and Kiowa, our flagship gigawatt-scale campuses in Texas and Oklahoma, where data center capacity is expected to commence ramping across 2027 and 2028. We also have multiple development projects advancing through the connection processes, including Batch Zero candidates in Texas, which represent some of the most strategically valuable interconnection opportunities in the country. The North American pipeline has a natural progression of scale. Childress demonstrates the operating model today. Sweetwater expands it across an even larger campus, and Kiowa provides the path to another hyperscale tier opportunity as power ramps from 2028. Every campus builds on the last. That's the compounding effect of having secured the right land and power positions early.

At the same time, we're expanding the platform into Europe through Spain. Today, we announced the acquisition of Nostrum Group and with it our entry into Europe. The transaction adds 490 MW of secured power in Spain, a GW scale development pipeline, and a team of more than 50 people across development, engineering, construction, and operations. What it really adds is a platform and the right people to build it. I want to acknowledge Gabriel Nebreda and the Nostrum team. Gabriel spent nearly 2 decades in European energy at EDP Renewables, managing GW of operating assets across multiple European markets, and most recently, as CEO of EDP Solar. He understands European power infrastructure as well as anyone, and we are excited to have him leading IREN's European platform. Spain is the right place to start.

Supportive AI policy, abundant renewables, lower build costs, and strong connectivity into broader European demand. Europe is a market where power availability and grid timelines are increasingly shaping where customers can actually deploy. Spain gives us a credible, scalable answer to that question. This is not just a power acquisition, it's the establishment of IREN's European platform. From Europe, we move to the other side of the world and an opportunity that matches the scale of everything we've just described. Australia is obviously not a new idea for us. We have been progressing large scale Australian projects towards secured grid access for some time, and we think the opportunity here is as significant as anywhere in our portfolio. This is why. Asia Pacific is home to roughly 4.8 billion people, around 60% of the world's population.

That includes some of the fastest-growing AI demand markets on Earth. Indonesia, Singapore, Japan, Korea. The infrastructure requirement to service that demand is enormous, and it is largely unmet. Australia is uniquely positioned to serve it. Abundant renewables, a trusted jurisdiction, strong rule of law, and as the submarine connectivity map shows, direct fiber links into major demand centers across the region. It is the natural anchor point for AI infrastructure serving APAC. We are already seeing hyperscalers and frontier labs make significant commitments to Australian operations, and we intend to be a major part of that story. Beyond Australia, we continue to progress global development opportunities that extend IREN's runway further still. The platform we are building is designed to create scale into demand wherever it develops, and the pipeline gives us the flexibility to do exactly that.

That is the global platform: secured power across North America and Europe, and a development pipeline extending into APAC and beyond. Securing power and building data centers is only one part of the equation. The other part is what happens when the compute goes live, how it is deployed, managed, and supported for customers at scale. That is where I'd like to spend a moment on Mirantis. This week, we welcome Mirantis into the IREN family, and I want to take a moment to acknowledge that. 650 people joined IREN this week. Engineers, operators, customer support professionals. A team that has spent more than a decade building and running cloud infrastructure for over 1,500 enterprise customers globally. That track record speaks for itself. What they bring is specific. Their k0rdent AI platform manages AI infrastructure across bare metal, virtual machines, and Kubernetes environments.

Exactly the complexity our customers are dealing with as deployments scale. They are also a founding ISV partner of the NVIDIA AI Cloud Ready Initiative, which means they are already deeply embedded in the same ecosystem we are building into. As we scale, delivery is not just about bringing GPUs online. It is about what happens after. Provisioning, monitoring, supporting customers through increasingly complex environments. Mirantis strengthens all of that. We are already seeing it, and they will play a central role in supporting our NVIDIA AI Cloud contract. To Alex and the whole Mirantis team, a big welcome. We're super excited to have you. What you have heard today is a company that has secured power at scale, is contracting revenue at scale, and is now building delivery capability at global scale. Anthony will now walk you through how we are funding it.

Anthony Lewis
CFO, IREN

Thanks, Dan. The capital strategy is designed to support the phased build-out of capacity Dan discussed, while maintaining flexibility and capital discipline. As of April 30, we had $2.6 billion in cash and cash equivalents. We expect this, together with operating cash flows, GPU financing, and additional financing initiatives, to support our near-term CapEx program, which includes delivery of the Microsoft contract and deployment of air-cooled capacity across Mackenzie and Childress. For GPU CapEx, we are leveraging secured debt and customer prepayments. As we have noted previously, approximately 95% of Microsoft GPU-related CapEx is expected to be funded through prepayments and GPU financing. We have work streams underway for additional GPU financing to support upcoming deployments. On the data center side, we expect our financing approach to evolve as projects move from development to construction and contracting, and ultimately to stabilized operations.

Early stage development can be supported by balance sheet capacity and corporate-level sources. As projects reach construction and customer contracting milestones, asset and project-level financing can be introduced. As assets are stabilized, refinancing and capital recycling can help support future builds. We will continue to maintain a disciplined balance of debt and equity as the platform continues to scale. I will now turn to the financial results, which continue to reflect the transition underway from Bitcoin mining to AI Cloud. Revenue was $144.8 million for the March quarter, compared to $184.7 million in the prior quarter.

Within that, Bitcoin mining revenue was $111.2 million, down from $167.4 million, driven by a lower average Bitcoin price and the ongoing decommissioning of mining hardware ahead of GPU installations. This was partially offset by continued growth in AI Cloud services revenue, which increased to $33.6 million, compared to $17.3 million in the prior quarter. Cost of revenues decreased by $25.9 million, primarily due to electricity costs from reduced Bitcoin mining capacity. Net loss for the quarter was $247.8 million, impacted by non-cash impairments of $140.4 million, primarily related to the decommissioning of mining hardware, as well as $23.7 million of unrealized losses related to capped calls associated with our convertible notes.

As we continue to transition our remaining Bitcoin mining operations towards AI Cloud, we expect to incur additional non-cash impairments associated with decommissioning mining hardware. These outcomes reflect the strategic reallocation of infrastructure toward AI Cloud growth, which we believe is the higher value long-term opportunity. Adjusted EBITDA was $59.5 million, compared to $75.3 million in the prior quarter, primarily on account of the revenue and cost of revenue items noted above. As noted, the quarter reflects the ongoing transition from Bitcoin mining to growing AI Cloud. As Dan noted earlier, we continue to target $3.7 billion in ARR by the end of calendar 2026. We expect that ramp to be back-end weighted, with Microsoft revenue and revenue from the additional 50,000 GPUs procured during the quarter expected to begin ramping in Q3 2026.

I will now turn back to Dan for closing remarks.

Daniel Roberts
Co-Founder and Co-CEO, IREN

Thanks, Anthony. Eight years ago, Will and I asked a simple question: What does the world need to build the right digital future? The answer was power, land, data centers, and compute, and the ability to bring them all together at scale faster than anyone else. Today, that thesis is playing out, and we are just getting started. With that, we will open the call for Q&A.

Operator

First, we have Mike Ng from Goldman Sachs. Please go ahead.

Mike Ng
Analyst, Goldman Sachs

Hey, good afternoon. Thank you for the questions and congratulations on all the progress. I just had 2 questions, if I could. First, on the 5-year NVIDIA AI Cloud contract, I was just wondering if you could talk a little bit about, you know, how many GPUs are being supported by the 60 MW and, you know, the cost per GPU. Second, you know, for Sweetwater and Oklahoma, I think you mentioned the data center capacity is coming in in 2027 and 2028.

I was just wondering if you could talk a little bit about, like, at what point do those sites become marketable, or maybe they already are, and, you know, what milestones do you typically need to hit to, you know, increase the likelihood of a tenant being willing to take that out. Thank you very much.

Daniel Roberts
Co-Founder and Co-CEO, IREN

No problems.

Kent Draper
Chief Commercial Officer, IREN

Happy to take that one.

Daniel Roberts
Co-Founder and Co-CEO, IREN

Go ahead.

Kent Draper
Chief Commercial Officer, IREN

Thanks, Dan. With respect to your first question, we haven't disclosed the specific number of GPUs, but as we mentioned on the call, it is approximately 60 MW of air-cooled Blackwells. We think that the contract value that we're getting, and obviously the relationship that we continue to build with NVIDIA, is very beneficial coming out of that contract. Importantly, this is a managed services deployment. It shows our ability to service different segments of the market as we move forward. With respect to your second question, as Dan mentioned earlier, we are still seeing extremely strong levels of demand within the industry, certainly outstripping supply.

What we continue to see as we move forward is that capacity becomes increasingly scarce further out than people were expecting. If we, you know, rewind even a number of months ago, for 2027, people were thinking that there was a relatively decent amount of capacity available. We're already seeing that capacity available in 2027 is extremely scarce. That is continuing to push into 2028 now as well. For us, there is certainly the ability to market those sites for 2027 and 2028 online dates. As Dan mentioned earlier, we're working through the type of customers that we bring into the mix and making sure that we are structuring the contracts in the right way to enable a flywheel at our end. Certainly the demand signals are very strong.

Daniel Roberts
Co-Founder and Co-CEO, IREN

Maybe just to add to that quickly, Kent. I think, to directly answer the question, there's nothing stopping us contracting that capacity today. It just gets easier the closer you get. The focus is on time to compute. The demand we know is there, and all it does is make the conversations and negotiations that we are having in real time for a lot of that capacity much easier when you've got a defined construction and delivery plan, rather than trying to make things up on the fly in parallel with a full-form agreement.

Mike Ng
Analyst, Goldman Sachs

Thank you very, very much. Appreciate the thoughts.

Operator

Thank you. Next we have Paul Golding from Macquarie. Please go ahead.

Paul Golding
Analyst, Macquarie

Thanks so much, and congrats on all the progress and the new relationships coming in-house. I wanted to ask about air-cooled GPUs in general. It sounds like with the 60 MW deployment at Childress IV for NVIDIA, that will be an air-cooled deployment along with the rest of the uncontracted capacity that you're deploying across British Columbia and Texas. Air-cooled is gonna represent a meaningful part of the strategy. I just wanted to ask how you see efficiencies as well as hardware performance looking so far based on the deployments that you've planned for, and how we can look at that from a financial perspective as well as we think about the model and the air-cooled opportunity. Thanks so much.

Kent Draper
Chief Commercial Officer, IREN

So in terms of efficiency and performance, I mean, what we're deploying across the air-cooled portfolio is the latest generation of NVIDIA air-cooled GPUs being Blackwells. They perform extremely well. There is very high demand for those across all Blackwell GPU types, and certainly continue to see customers finding a very good degree of performance versus cost efficiency from those units over time. And sorry, Paul, I didn't quite understand the second part of your question in relation to how that converts into revenue over time.

Paul Golding
Analyst, Macquarie

That's right, Kent. Just wondering with sort of retrofitting and repurposing of Bitcoin mining infrastructure for these air-cooled deployments, how that seems to be working out maybe from a margin perspective relative to some of the liquid-cooled deployments that you're doing around the Horizon projects, just given the, you know, simpler cooling opportunity there?

Kent Draper
Chief Commercial Officer, IREN

Yeah. From an operational margin perspective, it is slightly more efficient than the liquid-cooled deployments. Where we get the real benefit is, as Dan mentioned earlier, it's very capital efficient because we're taking existing air-cooled data centers that require, you know, relatively little CapEx to retrofit them compared to brand new build liquid-cooled facilities. That is the major difference in terms of the two. At an operating margin level, yes, air-cooled is probably slightly higher, but immaterial.

Paul Golding
Analyst, Macquarie

Thanks. If I could just sneak one more in around Europe and the Nostrum acquisition. As we think about the roadmap there, are you looking to use a similar form factor to what you've used either at Horizon or with air-cooled facilities, or is there a bespoke form factor you plan to leverage from that platform as you do the European rollout? Thanks.

Kent Draper
Chief Commercial Officer, IREN

One of the things that attracted us to the Nostrum opportunity, and we've been looking at Europe for a while, is that they did have, you know, significant land holdings that came as part of that and access to a large amount of secured power. That gives us quite a large degree of flexibility as we build out that platform over time. As to the form factor that we use, typically in Europe you do tend to see slightly more condensed build-outs, but we do have the ability there to utilize our typical modular design that we use across North America, which obviously may well bring construction advantages with it.

That was one of the key elements that we saw in terms of the platform that they have and the projects they've developed.

Paul Golding
Analyst, Macquarie

Thanks so much, Kent.

Operator

Thank you. Next we have Brett Knoblauch from Cantor Fitzgerald. Please go ahead.

Brett Knoblauch
Analyst, Cantor Fitzgerald

Perfect. Thanks, guys. Congrats on the, I guess multiple acquisitions over the last week and the NVIDIA partnership and deal. I wanted to touch on Mirantis a bit because I thought that was important to the long-term story. Could you maybe just, you know, elaborate how that fits into your go-to-market motion, how it might accelerate your go-to-market motion when it comes to, you know, landing these enterprise deals, which is also what it seems like the NVIDIA partnership wants you to do as well?

Kent Draper
Chief Commercial Officer, IREN

Yeah.

Daniel Roberts
Co-Founder and Co-CEO, IREN

Absolutely.

Kent Draper
Chief Commercial Officer, IREN

Happy to take that one initially, Dan, and then you can add. It brings with it a number of elements that we think are significantly attractive to our business: the ability to deploy quickly, and the ability to service enterprise customers that may require a higher level of software over and above bare metal. They also, as a large company with very substantial internal engineering resources, bring very good capability on the software development side. Further to that, again, having serviced customers for decades, they have an extremely well-built-out customer support function internally.

All of those elements are things that attracted us to the Mirantis team, and, you know, are able to add to the existing skill set and customer service support that we've already built up internally.

Brett Knoblauch
Analyst, Cantor Fitzgerald

Awesome. If I could maybe just do a follow-up, just double-clicking on the capacity ramp for '27. Am I right in thinking that of the 730 MW, 450 will come from the remaining Childress capacity and, I guess, the 280 would be coming from Sweetwater?

Kent Draper
Chief Commercial Officer, IREN

That's correct.

Brett Knoblauch
Analyst, Cantor Fitzgerald

Awesome. Thank you, guys.

Operator

Thank you. Next, we have Nick Giles from B. Riley Securities. Please go ahead.

Nick Giles
Analyst, B. Riley Securities

Yeah, thank you, operator. Hi, everyone. Guys, congrats on all the developments here. I know the IREN team has a lot of experience in developing infrastructure in Australia, but maybe less so under the IREN platform. I was curious if you could walk us through some of the key differences, specifically in power procurement, maybe commercial strategy, and so on. Thanks.

Daniel Roberts
Co-Founder and Co-CEO, IREN

Sure. Look, in some ways Australia is very similar to other markets, and the operation of the electricity market in Australia, managed by AEMO, is very similar to what we see in Texas under ERCOT. There are markets in Australia which resemble Texas in other ways: lots of land, good transmission line capacity, good fiber connectivity, and abundant renewables that aren't located close to other demand centers, similar to what we see in West Texas. There are a lot of parallels. The reality is Texas is just an easier place to do business, and we've been able to accelerate faster there. That hasn't stopped us continuing to incubate projects down in Australia, and we're getting far closer to those projects becoming a bit more of a reality.

I think the demand environment and the ability to service APAC and the demand constraints that we're seeing and hearing in our conversations with hyperscalers means that Australia looks like a fantastic frontier for us, and we'll look to accelerate that in parallel with North America and Europe.

Operator

Thank you. Just a moment for our next question, please. Next, we have Michael Donovan from Compass Point. Please go ahead.

Michael Donovan
Analyst, Compass Point

Hi, guys. Thanks for taking my question and congrats on the progress. How should we think about regional customer mix as the platform expands? Are certain markets globally better suited for enterprise and sovereign AI customers versus hyperscalers? Does that change the expected contract structure or margin profile?

Daniel Roberts
Co-Founder and Co-CEO, IREN

Look, it's going to evolve, and there's a lot of unknowns around this. But if you break it down, a hyperscale contract can mean two things. It can mean the hyperscaler is using the capacity for its own purposes, in terms of training and servicing workloads such as its own AI models, or it can mean they're just acting as intermediaries to aggregate capacity for end customers that we're talking to directly. Obviously in the case of the latter, whether you're dealing with a hyperscaler or going directly to the end customer, the end demand is the same. You've got different types of workloads, inference and training. Inference is a little more latency sensitive; with training, you can probably afford a bit more latency. You know, indicatively, we've had conversations around training models in Australia.

Yes, the U.S. to Australia is a long geographic distance, but it's actually not that far over fiber, particularly when you're talking about training models. Given where inference sits today as well, we're all using ChatGPT or Claude, and the response times are still, I guess, adjusting to the level of demand and the supply to service it. Look, it will evolve over time, and our objective is to build out an expansive ecosystem of end customers. The partnership with NVIDIA is designed around that. The Mirantis acquisition and its integration into our business is designed to help facilitate that over time, in addition to all the near-term operational capabilities it brings. The goal is very much to build out that diversified customer base over time across all of those markets.

Michael Donovan
Analyst, Compass Point

Appreciate that. A follow-up, if I may. Can you help bridge the 490 MW in Spain from secured power to time to first token? What has to happen before construction begins?

Kent Draper
Chief Commercial Officer, IREN

That is secured power, and the sites across the portfolio there are secured as well. From here, it's a matter of working through final design and permitting, which is already well advanced at a number of those sites, and then ultimately construction of those facilities. One of the elements we found very attractive was the near-term security of power. That is power that is available on a timeline we think is gonna tie in very well with general European demand, and we are already seeing a number of direct requests from existing and new customers for European capacity.

Michael Donovan
Analyst, Compass Point

Great. Thank you.

Operator

Thank you. Next we have John Todaro from Needham & Company. Please go ahead.

Austin Ortiz
Analyst, Needham & Company

Hi, this is Austin Ortiz on the line for John Todaro. Maybe just a quick question on how do you intend to finance the build-out for the recently announced NVIDIA deal? Seems to be around 5 GW, so just any color on that would be helpful. Thank you.

Anthony Lewis
CFO, IREN

I can take that. Yeah, so the CapEx involved for the retrofitting of the air-cooled data centers in Childress is pretty modest in the scheme of things. In terms of the CapEx for GPUs, obviously we've got a range of financing sources available to us. That obviously includes initiatives at the corporate level, but we can also look to finance GPU acquisitions in various ways through the debt capital markets as well. We'll be looking at all those initiatives.

Austin Ortiz
Analyst, Needham & Company

Understood. Thank you.

Daniel Roberts
Co-Founder and Co-CEO, IREN

In terms of the 5 gigawatts more broadly, maybe just to address that and the plan. That's obviously a lot of capital today, but the reality is you don't need all that capital on day 1. There's an S-curve of construction that takes time; it takes years to deliver this. This is the whole point around time to compute. It's not just a case of getting power and land. It's assembling multi-thousand-person construction teams and actually delivering it. The funding for that is just progressive over time. As we've seen, as we continue to deliver, we continue to drive revenue, we can reinvest that revenue in CapEx, and it continues to unlock more and more financing sources over time.

As part of the partnership with NVIDIA we've announced, they've got the ability to invest in IREN as we commission GPUs. Equally, there's other support mechanisms being discussed to the extent that, you know, we need them. The reality is capital markets are open. They've been very supportive of our plan, and we anticipate that continuing. The moment that changes, there's a whole world of capital out there in terms of other options, whether you're creative around private markets or otherwise. When you look at the GPU financing, which is the lion's share of that CapEx, the Microsoft contract is a great template. We financed 95% of that CapEx at an average interest rate of about 3% through prepayments and GPU financing.

The capital is out there as long as you sign good contracts, and you show that you can execute and operate this capacity.

Operator

Thank you. Just a moment please. Next, we have Joseph Vafi from Canaccord Genuity. Please go ahead.

Joseph Vafi
Analyst, Canaccord Genuity

Thanks, guys. Good morning, good afternoon. My congratulations here as well on the great progress. Just a couple of your thoughts here to gauge demand out there. I know you threw out a $3.1 billion contracted ARR going to $3.7 billion contracted exiting the year. What's your confidence in that uncontracted capacity and signing contracts? How is the demand out there for, say, that extra half a billion of ARR, and what kind of clients might you be looking to bring on board there? Then I'll have a quick follow-up.

Daniel Roberts
Co-Founder and Co-CEO, IREN

Yeah.

Kent Draper
Chief Commercial Officer, IREN

I think, Oh, sorry. Go ahead, Dan.

Daniel Roberts
Co-Founder and Co-CEO, IREN

Oh, sorry, Kent. Look, again, we're trying to reiterate this as much as we can, and I'm very happy for someone to point one out, but there are no idle GPUs. The prospect of GPUs sitting there unused, given how structurally constrained this market is, not just in the near term but in the medium term, just isn't a concern. We are having a lot of customer conversations, but all of our operational capacity is fully contracted. We're contracting substantial portions of capacity before it even arrives, and we're in discussions with a variety of customers, all the way from hyperscale clients down to AI-native labs, for all of that '26 and '27 capacity. When a signature's put on paper, it just flows naturally.

Our conviction is around the demand-supply dynamic, and you cannot tap into that unless you bring the capacity online. This is the key point: a customer contract doesn't deliver revenue. Having compute online delivers revenue, and that has been the focus.

Joseph Vafi
Analyst, Canaccord Genuity

Got it. Thanks for that, Dan.

Kent Draper
Chief Commercial Officer, IREN

Yeah. The.

Joseph Vafi
Analyst, Canaccord Genuity

Yeah.

Kent Draper
Chief Commercial Officer, IREN

I was gonna say many of the same things. The one addition I would make is that particularly for our air-cooled capacity, where we are adding substantial amounts across the second half of 2026 and into the early part of 2027, there is very significant demand on those timelines. That is the most constrained portion of the market, and that is directly what is leading to the dynamic Dan discussed, where there just are no idle GPUs in this market. Everything on shorter-term timelines is extremely attractive to counterparties.

Joseph Vafi
Analyst, Canaccord Genuity

Got it. That's great color, Dan and Kent. Just on your strategy and philosophy around customers and diversification there: if you are in the catbird seat here relative to fulfilling demand from multiple parties, how are you looking at broadening, deepening, and diversifying that customer mix over time? Thank you very much.

Daniel Roberts
Co-Founder and Co-CEO, IREN

It's something that we're looking closely at, Joe. There is no set formula as to the proportional splits between different types of customers. There are benefits in having hyperscale clients in terms of financeability and contractual certainty, but there are also consequences in terms of price, because you're not servicing the end customer in many of those instances. The ability to service the end customer has been something we've focused on since day 1. All of our early deployments have been very focused on non-hyperscale customers and getting as close to AI natives and enterprise as we can. The Mirantis acquisition certainly helps that. I'm not gonna sit here and say we're going 100% hyperscale or 100% AI-native end market. The reality is that blend will just emerge organically over time.

This, again, is part of the close working relationship we've got with NVIDIA. You know, we've spent a lot of the last fortnight in their San Jose office working through how we service all types of customers, all the way from the trillion-dollar hyperscalers through to the emerging AI scale-ups, where a lot of this innovation and development is taking place. It's funny, speaking to someone the other day: you don't need a sales team in this market, particularly when you've got NVIDIA. They see the whole ecosystem, and the introductions, the referrals, putting us in touch with anyone that needs capacity, it's all just happening so organically, so quickly, in real time, that it'll play out in a good way. I think a combination of hyperscale and a combination of other customers is absolutely the goal.

Joseph Vafi
Analyst, Canaccord Genuity

Got it. Congrats. Very exciting times. Thanks, Daniel.

Operator

Thank you. Our last question comes from Ben Sommers from BTIG. Please go ahead.

Ben Sommers
Analyst, BTIG

Hey, yeah, good afternoon, and thank you for taking my question. I was curious a little bit about older-generation GPUs. I know you've talked in the past about seeing the useful life of older generations, like H100s, extend out further than maybe people had originally thought. Kind of curious what you're seeing on the demand profile there and what types of workloads are going on to those older-generation GPUs.

Kent Draper
Chief Commercial Officer, IREN

The comments that we made about no idle GPUs apply to all GPUs, not just the latest generations. You know, older generations, A100s, H100s, H200s, are all effectively fully utilized across the industry. The demand picture continues to be strong. In some instances, you're actually seeing pricing for older-generation units climbing significantly, and there's a number of observable pricing points out there in the market where you can see that happening. Yes, the type of demand may shift over time. You may have older generations being used more for inference, but those older generations are equally suitable for certain types of training.

We just see strong demand across the board, both on the inference and the training side, and that continues to drive demand and elongated life cycles for those older generations of equipment.

Ben Sommers
Analyst, BTIG

Great. Thank you. Then just on potentially, you know, future conversations that you're having for, you know, potential contracts down the line, is there any talk of, you know, prepayment structures similar to that of Microsoft, or just kinda curious what you're hearing in the market on that end?

Kent Draper
Chief Commercial Officer, IREN

Yeah, it certainly plays a role in a number of those conversations, and we are still seeing prepayments being on the table in a large number of instances. That obviously factors in as part of the overall equation. It's not the single factor you're looking at; everything has to go together, with a combination of term length, prepayment, creditworthiness, and price. But prepayments are certainly very much on the table in the current environment.

Ben Sommers
Analyst, BTIG

Great. Thank you for taking my questions.

Operator

Thank you. I see no further questions at this time. I will now pass to Dan for closing remarks.

Daniel Roberts
Co-Founder and Co-CEO, IREN

Thanks, operator. Thanks, everyone, for joining us today. We remain focused on execution, delivering the 2026 plan, advancing the 2027 build-out, and positioning our now global platform for the opportunity beyond that. We look forward to updating you as we deliver. Thanks, everyone.
