Welcome, everyone, and welcome to Bank of America's 2025 Leveraged Finance Conference. I'm Ana Goshko on the credit side. I'm the research analyst covering technology and telecom, and we're thrilled to have CoreWeave with us today, and we have Paul Yim, the company's vice president for credit and capital markets, which is perfect for this audience, so yeah, Paul, thank you so much for being with us.
Great to be here today. Actually, before we get started, do you mind if I read this safe harbor?
No, go ahead.
It starts with, "Before we get started, I would like to remind you that CoreWeave may make forward-looking statements during today's fireside chat. Actual results may vary materially from today's statements. Information concerning risks, uncertainties, and other factors that could cause these results to differ are included in CoreWeave's SEC filings."
Okay, great. Okay, so Paul, thank you so much. I know you've been in packed meetings all day long, and I'm sure you've been handling all kinds of very detailed questions. So the purpose of this session is really to take a little bit bigger picture approach so we can kind of slow the speed down a little bit, talk about the company and its role in AI infrastructure. But since it is a leveraged finance conference, we're going to spend some time at the end to just talk about the debt structure and funding plans. Yeah. Okay, so in case there's anyone in the audience new to the CoreWeave story, which I doubt, but just in case, if you could just spend like a minute or two just explaining what you guys do and the role that CoreWeave plays in the AI ecosystem.
I know that's a loaded question, but we'll go into more detail.
Yeah, so I guess the best way to describe ourselves is we are the AI hyperscaler, right? We are focused on delivering high-performance compute, cloud-ready infrastructure at scale to the world's biggest companies as well as the world's leading AI labs. Our infrastructure is agnostic across inference and training workloads, and we can seamlessly shift between both. Our customers consistently rely on us for delivering the most performant cloud in the market.
Okay, and then obviously you exclusively work with AI data centers and provide the high compute and the GPU infrastructure. Just, like, a primer here, can you explain the key attribute of an AI data center as opposed to a general-purpose cloud provider?
Yeah, definitely. I think the first thing that pops out to folks is utilization rates are a little different. The equipment that goes into an AI data center is obviously a lot more heavy-duty. Liquid cooling to the rack is now paramount in all of our facilities, and it's just a requirement in order to power some of the high-power-density servers versus what a general compute server looks like. And I think you'll see that we've been very purposeful in designing our data center roadmap and selecting sites that are either ready for the products we're building or can be retrofitted to suit our needs. That's something we've been very tactical about and will continue to be.
Okay, so CoreWeave is a very physical business. So there's obviously data centers, power, chips, servers, racks, but there's also a software component of the business, and I think on the equity side, you guys are mostly covered by software analysts, so could you just talk about why you're a software company?
Yeah. I think definitely from a credit lens, I prefer to refer to us as an infrastructure business, but we should definitely talk about the software side, which in many ways is our secret sauce. I think there's a number of layers to it. CoreWeave Mission Control is our proprietary software orchestration layer, and it really is kind of the differentiator between us and any other cloud provider. And this has been validated by third-party consultants like SemiAnalysis on our performance. But basically what it does is it provides active health management of the clusters, and it's a tool that allows us to autonomously manage our AI cloud.
Beyond active management of clusters, it's observability on the site itself to ensure that we can deliver maximum performance while protecting the longevity of the chips, which is equally important to us when we think about the intermediate and long term of our business.
Okay. So there is, I think, a well-known bull bear debate on CoreWeave. So I think we're going to do a little bit of the bull bear here. So first, I'm going to give you the opportunity to just list the key pillars of the bull investment thesis, and then I'm going to straw man some of the other side and kind of let you respond to those.
Yeah, I think that's fair. How I would start is, I think the bull case is actually fairly straightforward. You have to believe on some level what our customers already seemingly believe, which is that this infrastructure is, in fact, mission-critical. That powering an AI future requires these complex workloads, which in turn require a fully developed tech stack to service them. This is something we saw well in advance of ChatGPT, right? We've been doing this years and years before. If you look back at NVIDIA's website in 2019, well before they were worth their first trillion dollars, they named us as a tier-one CSP for that reason. And that's something we're going to actively work towards: maintaining our industry-leading position.
Okay. So some of the key pillars, I'd say, of the sort of investor concerns. So one is really the GPU useful life and/or the residual value: the risk that the GPUs cannot be re-leased at the end of their initial contracts, either due to technological obsolescence, or, even if they can be, at a rate that's too low, diminishing the return to CoreWeave. So what can you say in response to those kinds of questions?
Yeah, I think it's a fair question. The first thing I would point folks to is, number one, we are customer-led, right? When we announce our CapEx, it's usually in conjunction with actually signing a customer. We only spend CapEx on a success basis. It's not speculative when we spend capital expenditures. It's definitely earmarked against our contracts. And similarly, I'm sure we're going to touch on this later on the SPV debt. That's how we design funding plans for them, right? Our contracts must ensure repayment of not only the infrastructure itself, but also the debt as well. So for our purposes, certainly for the investors in the audience on the bondholder side, there's no residual value risk that they're actually undertaking, because we're already generating levered free cash flow, after amortization and after interest, even during the first contract period.
Renewal, in many ways, is a bet that we are taking as a company on the upside of what remains. And you've already seen this prove out, actually, in the release from Mike during Q3, right? We announced a big renewal for a 10,000 GPU cluster. It's not a one-off. For us, that's just the example we pointed out because it was a big, material cluster. And in that particular case, we saw a customer renew with us within 5% of the original sale price. I think that's something you're going to consistently keep seeing. And part of that is a function of how high demand is for an inference product. Typically, when we're at the renewal point for a cluster, our customers are probably looking at what it means to keep inference workloads alive there. Now, remember, inference for our customers is a revenue-generating workload, right?
So for them, when they've already budgeted for what the expense is to continue to pay for CoreWeave and maintain access to these GPUs, it's just about making sure that they're still generating a profit on those sites, which is why we feel very strongly about our renewal curves. But I think most importantly, our lenders don't take a risk on renewal because that's not what we—I think this is something that's inherently different from other businesses that do leverage up in order to acquire their infrastructure.
Okay. And then you already touched on one of my next questions, but it's so you cited, which Mike had cited on the 3Q call, that there was a renewal, a proactive early renewal of a 10,000+ GPU H100 contract, which is one of the earlier generations, and that was done at a 5% discount to the original rate, right?
Yeah.
You're saying that wasn't just kind of anecdotally cherry-picked, that that is.
It's not a one-off. We are consistently close to, if not already, sold out of any of our older-generation architecture. I think what I didn't touch on there is there's enormous enterprise demand coming up right behind the IG guys, or investment-grade guys, as well as our big labs that want to keep active use of those clusters. And part of the reason for that is H100s have been out longer than the GB200s or GB300s, and therefore the software libraries that support them are bigger and wider, and the engineers at these other companies have more familiarity with them, which is why a lot of demand is being driven towards these older-generation GPUs, despite the fact that they may be three or four years into active use.
Okay. So then I think the second pillar of sort of investor concerns is the potential for oversupply. So one of the issues I think you already touched on, but does HPC demand wane when clients move from AI training to the inference stage? And then the second part of that is really right now, it is a supply-constrained environment for capacity, but will the need for CoreWeave's capacity be diminished when hyperscalers build up their own vertically integrated facilities? And by the way, we're kind of moving more towards the inference stage.
So let me start by addressing the inference point. And absolutely not. The short answer is absolutely not. Inference requires way more compute than training does. And the reason why is that all of a sudden it's not in a black box being used by the AI scientists who are running it through 100 billion parameters. Instead, it's being used by the user base of said AI company, or maybe a hyperscaler. On the inference side of the house, it's everybody that's accessing ChatGPT or whatever it may be. And as a result, it actually requires more GPUs. Now, it doesn't necessarily require GPUs that are as performant. What it requires is just more. And I think that's a big fundamental driver of, number one, our renewal pricing, and why we feel so comfortable with what we're saying out here and what we're observing in our demand funnel, right?
Inference simply requires more GPUs, full stop, compared to training. And we haven't even scratched the surface of what is required to develop training at scale. That's what gives us so much conviction on a long-term basis. Do we think the supply chain will ease? Absolutely. Do we think it's near? Probably not. And the one thing I would add on the second point of your question, on this supply-constrained market: this is something we've observed in the data center ecosystem for more than 15 years now. Every year we say, in two years, data center capacity is going to open up. We haven't quite hit a point where that's actually been true. And when that happens, does that require some recalibration of how costs and unit economics work? Absolutely.
But I don't think we're close to it yet, and we feel very comfortable with kind of our forward forecast.
Okay, and then on this idea that some of your customers may over time build up their own vertically integrated capacity. How fungible is the infrastructure? Can you repurpose your existing infrastructure for other customers?
Yes. So that's actually part of our secret sauce, and it goes to the software orchestration layer. We are able to rapidly repurpose clusters. It really just depends on how big the sites are. That introduces some level of complexity, but in terms of speed and pace, we can absolutely do it. It's usually less than a month, but it ranges, right? I don't want to put that out there as the only marker for it. So that's number one. Number two, I think there's been a long-held view in the data center world that eventually hyperscalers are going to start building their own and stop relying on the big public cloud data center builders. Some of them are private, some of them are public. We all know who the names are. And I think that's just something that hasn't come to pass.
And part of that bear thesis, that hyperscalers are going to move towards first-party development for their own data centers, extends to my analogy on GPUs: you have to believe internet infrastructure has been fully developed, that we're done building more data centers, and that compute is being serviced at a reasonable and acceptable scale to the world. And I just don't think we're close to it on the internet side. I can promise you we're not close on the GPU side.
Okay. Any other arguments from naysayers that haven't stood up yet that you'd like to address?
Yeah. The only one I was going to point out is, and I think we're going to spend some time on this, so maybe I'm jumping ahead too much. Do cut me off. For us specifically, we've observed some criticism of our use of the capital markets, which I think is mostly linked to a misunderstanding of how we actually maximize our efficiency through the use of capital markets, right? It's in many ways what allows us to compete on an equal footing with big hyperscalers that also do what we do, right? Build high-performance compute clusters and service customers at scale, whether it's AI labs or enterprise customers alike. And I think it's been a great tool in our menu of options to service our needs.
I'm not going to spend too much time on it. I know you're going to get to it later, but I think the most important thing I'd call out here is in order to support our backlog, which as of Q3 we said was $55 billion: even if you pro forma for all of the debt that is required to stand up that $55 billion of backlog, we will have sufficient cash flow streams to not only cover the SPV debt we would raise to fund our growth, but also to repay the bonds that are outstanding on our balance sheet.
Okay. So shifting to the data center side of stuff. So with regard to powered shell capacity, is there a preference for leasing versus owning/self-build? And a couple of related questions. How do funding costs play into this? And then two, after the Core Scientific deal cancellation, how do you think about data center acquisition potential?
Yeah. So let me tackle this a couple of different ways. And I'll start by saying we are building one of our data centers right now, right, in Kenilworth, New Jersey, which is 20 miles outside of Manhattan. It's going to be one of the biggest high-performance compute-enabled sites within an acceptable distance of Manhattan, which is going to make it very important. We are doing that through a joint venture partnership with Blue Owl, and we're excited about it. But at the same time, you'll see that we are primarily leasing most of our portfolio. A function of that is the timeline for a data center ownership model and build-out is a little bit incongruent with our current unit economics, right? A dollar spent on an AI server is worth more to us than a dollar spent on a data center site.
Now, as we start charting a pathway to investment grade and our credit improves, our cost of capital improves, are we going to reevaluate this? Absolutely. I think there are probably a set of trophy assets we care deeply about that we would like to build and own, but it's not something that impacts us today in a way that is forcing us to go build ourselves. So I think that's where I would start. To answer your last question on how we think about data center acquisitions, and I know this is going to come in next. We think about acquisitions in two ways. One is strategic. One is opportunistic. Core Scientific is absolutely an opportunistic acquisition opportunity. It was us showing that we were willing to equitize some portion of our lease obligations, right?
We were prepared to do that, and we thought the price we put out was fair. The shareholders felt a little differently, and I don't think that changes anything. Ultimately, we've locked up the power that we want from their portfolio, and we still feel very strongly about our partnership going forward.
Okay. So shifting to some more kind of pure financial type topics. So there is a ton of demand right now for what you guys offer, but you recently brought on a Chief Revenue Officer. So why?
Yeah. It's a great point. We're very excited about having John Jones join us. Great name. He came from Amazon, where he was responsible for product. And I think one thing that's a little bit of a shame for our business is we're seeing just enormous growth, as you're highlighting, from the hyperscalers, and it really just dwarfs what we see on the enterprise side. And as we're growing up and maturing rapidly as a public company, right, we have historically serviced a fairly short list of big customers that represent the bulk of our customer portfolio. Now, we've dramatically improved our diversification already from the start of the year. I think when we IPOed, we were showing somewhere around 80% to 82% exposure to our biggest customer. Today, it's less than 35%, right? So we've already dramatically improved that.
Our IG exposure is also greater than 60% already. However, there's not a big enough sales organization tackling the next layer of customers that exists out there, and it's enterprises like CrowdStrike, which we just announced a big partnership with, I think, a week ago or maybe two weeks ago. We announced one with IBM at the start of the year, and we expect to do a lot more of that, and that's going to serve as a launchpad for when we think about infrastructure in the outer years, a lot of these customers aren't necessarily needing to use the latest and greatest GPUs, and that helps support our business on a long-term basis. That's fundamentally one of the big reasons why we brought John in, and we're super excited.
Okay. Next topic is, speaking of big customers, you have a $6.3 billion backstop from NVIDIA. So could you explain how that works, and then I think it might be useful to recap NVIDIA's position as a vendor, a customer, and the third-largest shareholder of the company.
Yeah. So I think what you're going to see as we grow our business and build it going forward is we're going to have to show a little bit more of a bias towards investment-grade offtake. And a big reason for that is we are on a journey to hit investment grade. It's a little bit of a North Star for us. I know we're going to touch on it later, so I'm not going to skip ahead, but part of that journey is we need to sign big investment-grade contracts in order to show a pathway of achieving that outcome.
Now, that contract we signed with NVIDIA is an interruptible contract, and it basically allows us to use their credit: we stand up an SPV, raise debt against their credit to buy GPUs, and are then able to sell to another customer, an AI lab, or what have you, that perhaps isn't able to sign a five-year contract today, right, on the latest and greatest chips. That contract we announced was for GB300s. Today, if you're coming to us for GB300s, you basically have to lock in for a four- or five-year contract. There really isn't an example of where we're going to build a big cluster of GB300s and sell it on a shorter-than-three-year basis, right?
But if you're an AI lab that hasn't raised enough money yet, you're not going to be able to commit to a contract greater than six months, a year, two years. But it's very important to us that we give them an opportunity to maximize the use case of highly performant infrastructure, number one. And number two, get them an opportunity to put a product out there that hopefully gets them a chance to raise capital and put them on the map in some way. And I think that's something that NVIDIA saw, and this partnership is to further that journey, if you will.
Okay. So I'm going to touch now on the guidance change that happened around the third-quarter earnings report. So you had a strong third quarter, but then you ended up lowering the revenue guidance for 2025, by $100 million-$200 million on a base of $5 billion.
Yes.
Okay. And that was all attributed, I believe, to a data center vendor construction delay, which I think now is, you explained, is slipping from fourth quarter to the first quarter. Is that fair?
Yes. That is entirely fair.
Then, secondly, in terms of the operating income guidance, that was also reduced by $110 million. That's now $690 million-$720 million. That was due.
Also, to the same reason.
To the same reason? Okay. But despite that delay, you're still bringing on 260 MW of active power in the fourth quarter, which is like a big step up, right?
Yeah.
There is some element of having that come online and the cost of that coming online before the revenue fully ramps up. Is that fair?
Yeah, that's fair. I think it's just more of a quirk of the business model than anything else. Oftentimes, we're paying for data center expenses before we bring the clusters online, right? There's a little bit of a time lag, though it's not long, between data center readiness and customer readiness. And that's a little bit of the quirk that you're just observing.
Okay. And then any update on that data center construction delay? Do you feel confident that that's going to come online now in the first quarter?
We feel comfortable with the guidance we provided in Q3. I think you heard from Nitin, and I think that's still consistent today.
Got it. Okay. I'm going to skip over a couple of things in the interest of time here. So CapEx. So CapEx 2025 was reduced quite a bit because of that.
All related to the same.
All the same things. It's $8 billion-$9 billion now. It was $12 billion-$14 billion previously. I think from a cash flow perspective, a lot of that is still being spent because it's work in progress.
Correct.
So it's really just sort of a recognition of kind of when it hits CapEx.
Exactly.
On the accounting statement.
Yeah. It's basically the timing of acceptance and delivery of the site. It's creating this little timing mismatch, if you will.
Okay. So now switching to the debt structure. So there's an evolution of the debt structure, right? And you have alluded to a contract-first financing approach. Can you talk about that?
Yeah. I think it's been very much a guiding principle of ours, which is we're only going to spend CapEx if we sign the customer. If we sign the customer, we need to be able to raise financing to support it. And the historical way we've gone about this is we reach out to private credit. We've got great relationships with Blackstone, BlackRock, Magnetar, and a number of other folks that help structure what looks like an asset-backed financing package that still receives a parent guarantee from CoreWeave, Inc., which is why it still shows up on our balance sheet. And when we first did this in 2023, I think the understanding and acceptance of what this looked like was still developing, and not a lot of folks were comfortable with what a GPU cloud or a GPU hyperscaler looked like.
Going forward, I think one of our future ambitions is basically raising these, especially for our investment-grade offtake customers, on a non-recourse basis without a parent guarantee. And there's a number of benefits to that. Number one, it's getting the look-through credit to our customer, which in many cases is very highly rated, AA, AA- or AAA. And to the extent that we're able to achieve that outcome, we'll be able to capitalize on their credit profile, similar to how the data center ecosystem already structures project financings, right? When a big data center operator is building a site, they're usually structuring the exact same vehicle. It's a project finance box that the look-through is to the customer, and they borrow against that rate. I think we saw Meta do this recently with the Hyperion Data Center.
We saw a couple of other folks that do this consistently in the market. Our goal is to do this for ourselves first on the GPU side, and our expectation is to be able to scale this up, especially as we build our investment-grade backlog.
Okay. And so, I've heard management talk about that your balance sheet is actually a diverse set of balance sheets. So that's really what they're referring to, right?
Yes. We have various SPVs that support different sets of contracts, and they have different sets of terms and credit spreads on them. But at the end of the day, they all flow back up to the parent, right? I think what's important to note is all of our debt is self-amortizing at the SPV level, right? Our lenders down there don't take renewal value risk, and our parent investors or investors in our equity story or our bondholders, they get to enjoy the benefit of the levered dividend or free cash flow yield out of the SPVs back to the parent. And right now, we have an umbrella of three or four SPVs below us. We expect to continue to do them. We expect to outperform versus relative to prior deals, and hopefully, that's where we're able to rationalize some of the cost of capital advantages that we have.
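The "self-amortizing at the SPV level" point above can be sketched with a standard level-payment amortization schedule. This is a hedged illustration only: the $1.0B loan size, 9% rate, and four-year tenor below are hypothetical figures chosen for the example, not CoreWeave's actual SPV terms. The point it shows is structural: a fully amortizing loan is scheduled to reach a zero balance within the initial contract term, so the lender is repaid without relying on renewal or residual value.

```python
# Illustrative sketch of a fully self-amortizing SPV loan (hypothetical
# figures, not CoreWeave's actual terms). The balance is scheduled to hit
# zero within the initial contract term, so the lender takes no renewal
# or residual-value risk.

def amortization_schedule(principal, annual_rate, years, payments_per_year=12):
    """Level-payment schedule; returns (payment, interest, principal_paid, balance) rows."""
    r = annual_rate / payments_per_year
    n = years * payments_per_year
    payment = principal * r / (1 - (1 + r) ** -n)  # standard annuity formula
    balance = principal
    rows = []
    for _ in range(n):
        interest = balance * r
        principal_paid = payment - interest
        balance -= principal_paid
        rows.append((payment, interest, principal_paid, balance))
    return rows

# Hypothetical SPV: $1.0B drawn at 9% against a four-year customer contract.
schedule = amortization_schedule(1_000_000_000, 0.09, 4)
final_balance = schedule[-1][-1]
print(f"final balance after 4 years: ${final_balance:,.2f}")  # effectively zero
```

Under this structure, any cash flow the contract generates above debt service flows up to the parent as the "levered dividend" the speaker describes, while the lender's claim is extinguished inside the contract period.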
Okay. So right now, the run-rate EBITDA is about $3.4 billion, your debt's $14 billion, and you've got $3 billion of cash, right? So your leverage right now is 4.3 times gross, 3.4 times net. Your CapEx for next year is guided to be two times what it was in 2025: $24 billion-$28 billion. So some of that, as we talked about, is actually really being spent this year; it's just getting recognized as CapEx next year. But nonetheless, it's a big number. So could you talk about the sources? I think you've really addressed it, but if you kind of pull it all together, what are the sources for funding your CapEx needs? You do have cash from operations, and then you have upfront payments from some customers.
Exactly.
You're going to put in this committed financing. Is there potentially going to be a need for more unsecured financing?
Not explicitly stating it. We don't need to access the bond market, to be clear. We can fully support our $55 billion backlog. Obviously, some of that is already spoken for by the existing SPVs we've raised, but our expectation is to be able to raise more SPVs that either have more efficiency or allow us to tap into different financial instruments that, once again, isolate the look-through to the contracts as opposed to the parent itself. And that is going to enable us to look at a bunch of different options. And I think what's most important to recognize is, to be clear, our DDTL 1 and 2 that are outstanding didn't require equity injections because of what you highlighted, which is that some of our customers still contribute upfront prepayments, and they use it as a means to toggle the contract value on a dollar-per-GPU rate.
It's a great instrument for us to maximize our credit outlook so we don't have to lean as heavily on parent financing. Are we going to look at the high-yield bond market? Probably not at current levels, but there absolutely is going to be a time and place. We're going to try to build our portfolio in a way that's more thoughtful, more credit-worthy, and allows us to access a higher throughput of capital markets capacity.
Okay. And then so I did cite, so right now, the gross leverage is mid-4s, right?
Yes.
Does that go up before it comes down?
Yeah.
I'd like you to talk about your IG North Star, how you're going to get there.
I think this is a great point to spend some time on. We actually like to think of our debt, and I know a lot of folks in the audience know this too, on a pro forma basis. It's more fair: if we're going to get credit for the backlog and what EBITDA would look like on a ramped basis, we should be burdened with the debt as well. And I think we spent some time on this back in May with a lot of the folks in the room, highlighting how new contracts we sign are ultimately just deleveraging events upon stabilization. Stabilization is basically when the cash flows are ramped, meaning sites are delivered, clusters are built, and customers have accepted. Upon stabilization, most of our clusters are on a ramped basis.
Individually, SPV by SPV, they represent roughly between two and three and a half times net leverage. So when you account for where our current leverage is, new contracts are absolutely going to be deleveraging events as we announce them. Now, the only caveat, to your point, is there is a little bit of a lag period between built and ramped. Now, what's important here is I gave away a little bit of our secret sauce that's coming up next, which is doing non-recourse facilities. When we do non-recourse deals, they are functionally remote from the parent, right? If anything were to happen to the box itself, or the SPV, there is no impact to the parent, and the parent does not need to cure the SPVs. That is the objective of our future financing packages for some of these big contracts.
I think that's going to be a big driver in how we message and communicate to the rating agencies on our pathway to hitting investment grade. I don't know the exact time frame, and we're obviously an interesting company just given our growth profile and how much we spend, but so long as that expenditure is being locked away in a vehicle that's away from the parent, and what comes out of that SPV is an investment-grade stream of dividends, I think we're going to be able to chart a fairly good narrative on how to understand the business's evolution from where we are today to what we would like to look like in three years.
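The leverage arithmetic running through this exchange can be sanity-checked with round figures quoted earlier in the conversation (roughly $14 billion of debt, $3 billion of cash, and $3.4 billion of run-rate EBITDA). Note these rounded inputs land slightly below the 4.3x gross / 3.4x net multiples cited, which presumably reflect more precise underlying figures, so treat this as an illustration of the calculation, not a restatement of the company's numbers.

```python
# Back-of-the-envelope leverage math using round figures quoted in the
# conversation; the multiples actually cited (4.3x gross / 3.4x net) imply
# slightly different precise inputs, so this is illustrative only.
run_rate_ebitda = 3.4e9   # ~$3.4B run-rate EBITDA
total_debt      = 14.0e9  # ~$14B total debt
cash            = 3.0e9   # ~$3B cash

gross_leverage = total_debt / run_rate_ebitda            # ~4.1x on these inputs
net_leverage   = (total_debt - cash) / run_rate_ebitda   # ~3.2x on these inputs

# Stabilized SPV-level net leverage range mentioned above: 2.0x to 3.5x,
# i.e., contracts stabilize below the current consolidated multiples,
# which is the sense in which new contracts are deleveraging events.
spv_net_leverage_range = (2.0, 3.5)
print(f"gross: {gross_leverage:.1f}x, net: {net_leverage:.1f}x")
```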
Okay. Great. So almost out of time. I skipped over a couple of topics, but we appreciate all the time you've been spending with investors. So hopefully, everyone has a chance to ask additional questions of you. And you were super efficient in this session, so I really appreciate it because we hit on a lot. With the minute that we've got left, is there anything that we didn't touch on that you think is important? Any closing comments about what you guys are most excited about?
We're super excited about the demand profile that we're staring at for the next year. I think there's a lot of talk about what's going on out there, but I think you can kind of read between the lines when you're just looking at the earnings reports of a lot of our biggest customers, right? They're not observing a slowdown, because the demand does exist out there. And I think so long as that translates, we're going to be the beneficiaries of a lot of how this unfolds on a long-term basis. Remember, we're infrastructure builders. We are a software company that builds infrastructure. And while we are hopeful that each and every one of our customers is individually successful, in many ways it doesn't actually impact us which one gets there.
The reality is this current supply-demand market necessitates that each of our customers locks into longer and longer contracts with us. I think that's the most resounding rebuttal, if you will, to what people are saying about the useful lives of the GPUs, in addition to what we're telling you about renewals.
Okay. Great. Okay. With that, we're just out of time, so it's been perfect. Paul, thank you so much for being with us.
Thank you.