OK, great. Thank you, everybody. I'm Karl Keirstead. We've got Tim Arcuri, who covers NVIDIA, and honored to have Nitin Agrawal of CoreWeave here. You know, today's been a fun day for me because this has been the theme of today, partly by design. But you know, we had Nebius on stage earlier today. We just got off stage with Crusoe and Lancium, who are building out the next Stargates. Now we've got Nitin to talk through CoreWeave's story. Shortly afterwards, I think in a couple of hours, we've got Blue Owl Capital and Magnetar to talk about how they're financing all of this build-out. So this has been a fun day, actually, to go deep into this whole GPU cloud build-out. Did you want to start? Nitin's got some profound words he'd like to share at the very start.
Yeah, this is the CYA.
All right, before we get started, I would like to remind you that CoreWeave may make forward-looking statements during today's fireside chat. Actual results may vary materially from today's statements. Information concerning risks, uncertainties, and other factors that could cause these results to differ is included in CoreWeave's SEC filings.
We did it. Thanks.
Great. Thank you, Nitin. Let's chunk this up a little bit. Let's talk about some demand-related questions. We'll talk about some supply issues as you're scaling up the infrastructure. Tim's obviously going to want to talk to you about GPUs and TPUs and stuff like that. But let's start on the demand side. Obviously, CoreWeave put up 134% revenue growth this past quarter. Your backlog is $55 billion; that's 10x your revenue run rate. Things feel pretty good. You've begun diversifying away from the earlier high concentration with Microsoft. Maybe, Nitin, I can ask you to describe to some extent the demand you're seeing today. I mean, it feels phenomenal, but you can go a little bit deeper.
Yeah, I think the words we've typically used over the recent past to describe it have ranged from insatiable to relentless to tremendous. You know, it seems that we have, even within this year, had a couple of step functions upward in demand. I would say that to start the year, demand was pretty relentless. And then we found ourselves over the summer where seemingly all, or most, of our customers wanted a whole lot more, and a lot of potential new customers wanted a whole lot more. And yet where we find ourselves today, as we said on our earnings call a couple of weeks ago, is that it seems like that's taken another step upward. Right?
I think as we see more use cases for the compute that are delivering ROI, that are transforming business and industry, more people want more. And frankly, we continue to find ourselves in the position of, how do we bring on capacity quicker to service this demand? Because that continues to be the constraint on growth for our business.
OK. Let's go one layer deeper and get your judgment on where that demand is coming from. Is it that the model providers woke up and realized the merits of reinforcement learning, so they needed more compute? Is it that everybody underestimated, let's say, the compute intensity per prompt on consumer AI products like ChatGPT, and therefore you need a lot more inference compute? What was the trigger for these step-ups in compute demand this year, as best you can estimate?
I don't think there's a single one; I think it's many things all in one. I think scaling laws are continuing to hold in a way where, even on a pretraining basis, you want more. I think the proliferation of post-training and other types of training that are very compute-intensive and hungry further accelerates that. And then on top of that, from an inference perspective, the world has seen innovation there that is also more compute-intensive, namely reasoning models, or chain of thought. And I suspect that as you see more AI labs and enterprises bring AI products to market, those products will continue to become more compute-intensive and hungry. So you have all of these tailwinds pushing on the same bottleneck; they all point in the same direction of more compute.
Maybe a year or 14 months ago, I think the world very simplistically thought of training as compute-intensive: you do it once, then you ship a model, and it's just about latency. And inference was thought of as very compute-light: you ask a question, you get an answer. Then, 14 months ago or so, o1 was released, and the world saw a new vector of inference. I think the proliferation across those three things is the answer.
What about demand from a customer cohort perspective?
Yeah.
Obviously, the frontier model providers have been fueling a lot of this compute demand, much of it. But now we've got a number of smaller AI natives, the Poolsides of the world, Cursor, et cetera, fueling demand. And at some point, the enterprise side will kick in. So where are we in that demand curve for, and maybe you think of it differently, frontier model providers, AI natives, and traditional enterprises like UBS?
Sure. I would say from a frontier AI lab perspective, it is more pressing; it's a question of how you access that greater compute and continue to grow your footprint. I think the runway there is very long. Based on where we are today, maybe it won't always be this way, but it almost feels endless. On the enterprise side, we are undoubtedly earlier in the journey. And that's setting aside the hyperscalers and how folks like Google and Meta are clearly using AI and GPU compute to re-accelerate growth and drive returns in their businesses. When we think about the use cases, I think clearly there's going to be a role for AI labs who are helping productize and sell AI.
The growth of OpenAI and Anthropic and businesses like that, which are adding billions and billions of ARR in a given year at an unprecedented pace, has been just unbelievable, and it's validating that. But I would say the maturation of the application layer is clearly early days. It's not lost on us that some of those enterprises are going to need some help to get there. We made an acquisition of a business called Monolith a couple of months ago that was really centered around helping bring AI to the physical world. What they do, and what we'll be doing together, is focusing on more compute-intensive industries like industrials and, in Monolith's case, autos, where they have mature workloads and a clear use case. One of them, for example, is battery optimization and experimentation.
But they don't necessarily have the product to do it. And so we think about how we can deploy our own software and services to help accelerate that for some of the old-world enterprises while also serving the AI labs that are clearly driving that market ahead.
OK. I'll ask you two more, and then we'll turn to Tim. On the supply side, Nitin, obviously on this last call, you highlighted some push-outs. Now, standing up gigawatt-scale AI campuses is probably one of the more complex go-live supply chain problems that anyone will ever see, so the notion that there could be a quarter delay due to partner delays, I don't think, is shocking to anybody. But can you describe your confidence in hitting those supply targets, and whether there's any new supply bottleneck bubbling up that might be interesting to flag for the group?
Yeah, and you're absolutely right. The scale and pace at which these data centers and AI campuses are being built is simply unprecedented, and there's not a playbook. The joke, or I don't know if it's a joke, but the analogy I like to make is that you're asking someone to build the Death Star LEGO set without the instructions. You don't know what you're doing; you figure it out, and it makes you better for the next one. I would say the supply chain issues that we're seeing are not new. We've been pretty artfully, I think, navigating those for the last couple of years, and we will continue to. And it's not a single thing. People ask, is it that there isn't enough labor? There isn't enough labor. People ask about long-lead-time equipment.
There isn't enough long-lead-time equipment, and it's called long-lead-time in part because it takes a while to get it. So we're seeing the confluence of those things. But again, we've scaled out clusters at, I think, 41 data centers across North America and Europe in the last several years alone. We've been doing that while working through this, and we will continue to do so. When it comes to the fourth quarter, we got on our third quarter earnings call, and we were pretty disappointed that we had to revise guidance because something was slipping versus the expectations underpinning our guidance. It was a matter of weeks, which I can count on my hands, but when it slips from one quarter to the next, it appears pretty acute. We talked about how the vast majority of that comes online in Q1.
We are tracking very nicely versus what we said a few weeks ago on our earnings call. But how do we keep working through it? I would say self-build helps a bit. We've talked about our first couple of self-build projects in Kenilworth, New Jersey, and Lancaster, Pennsylvania. An externality of that has very much been that we've been investing this year in meaningfully growing our data center teams: data center technicians, project managers, et cetera. That not only gives us more of a self-build capability, which is a small minority of what we do today, but it also allows us to have more boots on the ground in more places and take our own view of the timing and development of those sites, such that we can more transparently and accurately communicate those to our customers and folks like the people in this room.
OK. Let me ask you one more. We'll hit you with a hot subject, and that's the AI bubble concern that people have on their minds. Because I'm listening to you, and you're talking about insatiable demand; you're expressing confidence in standing up significant amounts of supply. Yet we can contrast that with some thoughtful investor concerns that you and the whole GPU cloud infrastructure business are overbuilding, and that the demand is not going to be there, Nitin, in three years' time. So take that head-on and give us your rebuttal.
Absolutely. First and foremost, I disagree with the assertion that there's a bubble. When we look at the pace at which AI is being rapidly adopted and monetized, and I talked a few minutes ago about the tremendous and overwhelming growth we're experiencing, that's a direct output of leading AI labs and how quickly they're growing. We're seeing how the hyperscalers are monetizing, and we're all saying there's simply not enough capacity. I think people like to make a comparison to some other things like the dot-com bubble, and other people can yell at me if they disagree, but from my perspective, it doesn't seem like you were ever really hearing in 1998 or 1999, "you don't get it; there's just not enough fiber." It was quite the opposite: they were building for something on the come.
We are just trying to build what our customers are demanding from us right now. We are signing longer-dated take-or-pay contracts for capacity that comes online in the next nine to 12 months and that customers are committing to use for the next five years. The cash flows from that are paying for that CapEx; they're naturally deleveraging to pay down our debt and delivering us free cash flow. And that leaves us in a position where we will have effectively depreciated, paid-for infrastructure, some cash flow, and the opportunity to continue to monetize what we think is the most performant solution in the market going forward. Very little of what we do, and nothing of what we do on the GPU CapEx side, is speculative. We are building to demand; we are not trying to outpace it. In fact, we are struggling.
I think a lot of the hyperscalers have said they are struggling just to keep up. I think that is a very different paradigm than some of the other situations people claim are analogous.
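To make that take-or-pay arithmetic concrete, here is a minimal sketch under purely hypothetical assumptions: a five-year contract, CapEx fully financed with self-amortizing asset-level debt, and illustrative revenue and rate figures that are not CoreWeave's actual terms.

```python
# Hypothetical sketch of the take-or-pay economics described above.
# All figures are illustrative assumptions, not CoreWeave's actual terms.

def contract_cash_flows(capex, annual_revenue, rate, years=5):
    """Five-year take-or-pay contract financed with self-amortizing,
    asset-level debt; contracted revenue services the debt."""
    debt = capex  # assume the CapEx is fully debt-financed at close
    # Level annuity payment that fully amortizes the loan over the term.
    payment = debt * rate / (1 - (1 + rate) ** -years)
    for year in range(1, years + 1):
        interest = debt * rate
        principal = payment - interest
        debt = max(debt - principal, 0.0)  # naturally deleveraging
        free_cash = annual_revenue - payment
        print(f"Year {year}: debt ${debt:,.0f}M, free cash ${free_cash:,.0f}M")

# Example: $1,000M of CapEx against $320M/year of contracted revenue at 9%.
contract_cash_flows(capex=1_000, annual_revenue=320, rate=0.09)
```

By year five the debt is fully amortized out of contracted cash flows, leaving depreciated, paid-for infrastructure still available to monetize, which is the dynamic described above.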
Yeah, got it. OK, helpful, Nitin. Over to you, Tim.
Great. Thanks, Nitin. So another debate you hear about is vendor financing. And you signed this deal with NVIDIA. And at the time, people thought, well, that's vendor financing, but it's not at all, really. So can you actually talk about that?
Yeah, you're absolutely right. And we got some other questions about it too, just based on headlines. And I think it's misunderstood, so thank you for asking. We announced a $6.3 billion partnership, or collaboration, with NVIDIA, I think in September. And that looks like a customer contract, like virtually all of our other customer contracts, save for a key exception, which is this concept of interruptibility. That allows us to say: NVIDIA, we're pausing your access to compute. We know you have use cases for it, and you would love to have it. But we're going to pause it, and we are going to go sell it to somebody else.
Who that somebody else is in this instance: think of the small and medium-sized companies or smaller AI labs that are not in a position today to commit to the five-year contract that we require before we go buy the servers for a frontier-scale cluster. That means the barrier to entry is pretty high, because if we don't think you can afford it, we're not going to go buy the servers, build the cluster on your behalf, and give you the capacity. And so what we're seeing here is that it's almost like a product, a win-win-win for us, for NVIDIA, and for the end customer we interrupt NVIDIA with. If no interruption happens, which is unlikely to be the case anytime soon, NVIDIA gets the compute, and they have a use case for it.
But more importantly, we are able to go service and acquire the smaller customer who wants to be on our platform, that we want on our platform, and that NVIDIA would, I'm sure, love to have on our platform, because we do deliver the most performant compute in the market; they just can't afford it otherwise. So they get access, and we don't sacrifice our discipline around CapEx, while being able to go out and acquire a customer for whom the barrier to entry would otherwise be too high. And hey, knock on wood, some of those customers grow, and they graduate into their own direct contracts because they've scaled. But a big part of their scaling is likely because they had the access that we unlocked through that partnership.
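As a purely conceptual illustration of that interruptibility mechanic, here is a toy allocator in which an anchor tenant (a stand-in for the NVIDIA commitment) holds capacity by default but can be preempted when a smaller customer requests it. The names and structure are illustrative assumptions, not CoreWeave's actual scheduler.

```python
# Toy model of interruptible capacity: an anchor tenant uses the cluster
# by default and is paused when another customer needs the GPUs.
from dataclasses import dataclass, field

@dataclass
class Cluster:
    gpus: int
    anchor: str = "anchor-tenant"  # hypothetical stand-in for the NVIDIA deal
    allocations: dict = field(default_factory=dict)

    def available(self) -> int:
        return self.gpus - sum(self.allocations.values())

    def request(self, customer: str, gpus: int) -> bool:
        """Serve a customer, interrupting the anchor tenant if needed."""
        if gpus <= self.available():
            pass  # enough idle capacity; no interruption needed
        elif gpus <= self.available() + self.allocations.get(self.anchor, 0):
            # Pause part of the anchor's allocation to free up capacity.
            self.allocations[self.anchor] -= gpus - self.available()
        else:
            return False  # cannot serve even with a full interruption
        self.allocations[customer] = self.allocations.get(customer, 0) + gpus
        return True

cluster = Cluster(gpus=1024)
cluster.allocations["anchor-tenant"] = 1024  # anchor holds it all by default
print(cluster.request("small-ai-lab", 256))  # True: anchor is paused to 768
print(cluster.allocations)
```

The win-win-win framing maps directly: idle capacity goes to the anchor, interruption serves the smaller customer, and no GPU is bought speculatively.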
Right. So it's not like they do a deal just for the sake of moving GPUs. This is for the sake of expanding the TAM.
Absolutely. Expanding the ecosystem and reducing that barrier to entry for the long tail of customers who aren't the world's leading AI labs with seemingly limitless access to capital.
Great, thanks. And then another question. One thing I like to look at to determine the supply-demand balance is to ask companies like CoreWeave what the pricing is for the oldest instance being used for AI workloads, which in your case is Ampere.
Yeah, it's certainly Ampere, and then it's L40 and Hopper. On the pricing for each of those: Mike went on TV the day after earnings, in fact, and we said we're virtually sold out of Hopper, of Ampere, of L40. And when you look at the pricing quarter over quarter over quarter, it's proven to be remarkably stable. So all signs point to stability there. We also talked about on our earnings call how we had our first, and I'll highlight it's our first, it's not an example, it's the only example we've got, of a large-scale, call it 10,000-plus H100 cluster beginning to approach contract expiry. This was probably two quarters in advance. And the end customer there said, we want to recontract this for an extended period of time.
They agreed to pay an ASP that was within 5% of what they agreed to a few years ago. So why did they do that? We don't know the use case; we build fungible. We don't ask training or inference. That's for the customer to decide, and our software helps them do the rest. The best guess is that the customer was likely using that cluster for an inference use case, saw a really attractive ROI on it, and said, "We are comfortable continuing to pay a very similar price per GPU hour because we understand the ROI. It is tangible, and it is attractive to us. So let's keep doing that."
Got it. And maybe just one last one from me. There's this notion that you're going to wait around and pick and choose generations from NVIDIA, that you're going to skip over one generation, and not just CoreWeave, but customers generally. How do you view this idea that customers will say, oh, I'm not going to buy Rubin, I'm going to wait for Rubin Ultra; or I'm not going to buy Blackwell, I'm going to wait for Blackwell Ultra. How do you think about that?
Yeah, I think our experience has been more of a "yes, and": we're going to buy some of this, and we're going to buy some of that. We are pretty deeply entrenched with our customers and understand, hey, you want to buy some Blackwell, but you're not going to buy everything in Blackwell right now, because you know you're going to buy some Rubin later and probably some Feynman after that. And so we take pride in being first to market with seemingly almost every generation of NVIDIA GPU technology over the last couple of years. I would expect that to continue based on the customer conversations we have. We think that demand is pretty rampant across generations, and people are being thoughtful about how much they buy of this generation with the next one in mind.
But we haven't seen someone say, "Oh, I'm going to skip over this one." It's more, I would say, a timing question. Let's say we're beginning to talk about a data center that might come online in early 2027. The customer then has to decide: do I want to keep working with Blackwell, because by then I'll have scaled Blackwell clusters, I'll have configured software, and my engineering will have devoted time to it, or do I want to get started and push that to Rubin? But I haven't seen it as one or the other so much as, how do we sequence the timing of everything.
Got it. Back to you, Karl.
Yeah, I'll ask you two or three more, and then there might be time for one or two more for you, Tim. So on the fungible infrastructure question, let's dig into that a little bit, because all of us listen to Satya on his pods. And he's very fond of saying that Microsoft is building a super fungible infrastructure, not for any single customer, any one location, or any single workload type, but others are building that way, and therefore they're absorbing a lot more risk than Microsoft is. Can you comment on what CoreWeave is doing?
I would say fungible fleets have been something we've been pounding the table on for the last several years, and part of that is location. We have some larger campuses we're working on in certain locations, and we have some other, smaller campuses where we think location matters, and we think customers want a mix of all of the above. It's not necessarily one or the other. But more importantly, it's the software. It's how we build the technology stack to fungibly serve across training and inference, such that our customer doesn't have to tell us what they're going to do. We enable them to do training one day and inference the next, and I do think that fungibility really matters.
I do think the flexibility for us to be able to take a cluster and turn it into a bunch of small clusters or one massive cluster, or to have one customer use it one day and another customer use it for something else a couple of weeks later, is pretty critical to the evolution of AI workloads, which will continue to develop in the coming years.
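As a toy illustration of that flexibility, here is a minimal sketch of carving one flat GPU fleet into logical clusters of whatever sizes the moment demands; it is purely conceptual, not CoreWeave's actual stack.

```python
# Toy illustration of a fungible fleet: the same pool of GPUs re-partitioned
# into one large cluster or many small ones between tenancies.

def partition(fleet_size: int, cluster_sizes: list[int]) -> list[range]:
    """Carve a flat GPU fleet into logical clusters of the requested sizes."""
    if sum(cluster_sizes) > fleet_size:
        raise ValueError("requested clusters exceed fleet capacity")
    clusters, start = [], 0
    for size in cluster_sizes:
        clusters.append(range(start, start + size))
        start += size
    return clusters

fleet = 4096
print(partition(fleet, [4096]))      # one massive training cluster
print(partition(fleet, [512] * 8))   # next week: eight inference pods
```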
OK. Let's talk about another aspect of the Street's concern, and it's not so much whether demand will be there. It's more, will the financing be there? Microsoft and Meta and Google don't need to lean on debt capital markets or vendor financing or GPU lease-backs to finance this, but CoreWeave does. And by the way, so does Oracle, and so do several other emerging GPU vendors. So how would you describe, as best you can, the state of the financing environment right now for building out these AI infrastructures? As I mentioned, we're going to have Blue Owl and Magnetar up here this afternoon, so trust me, we'll ask them.
Two of our partners.
What's your perspective? How healthy is it today? Are you seeing any pullback?
So taking a step back, CoreWeave has been built on, I think, two vectors of excellence: technological and engineering excellence, and excellence in navigating the capital markets and designing our customer contracts in a way such that they are maximally financeable. It's no secret there's been some volatility in the equity market, and even in the bond market, over the past few months. But I think you've got to focus on how we primarily finance our business, which is these asset-level delayed draw term loans that we designed over the last few years. We've industrialized the way we write customer contracts and raise that capital, such that we can access it at increasingly lower costs of capital across market conditions.
The two, or maybe three, key components of whether we have access to that market: are the contracts written the right way? We know how to do that; we pioneered this market. And can CoreWeave execute? I think we, over the last few years, have built a track record of excellence there. Frankly, our first delayed draw term loan, when we were pioneering this market, was for an investment-grade customer. That was priced at SOFR plus 962, I think it was. Was it that expensive because people didn't like the investment-grade customer? No. It was because people hadn't seen CoreWeave execute.
But as people have gotten more comfortable with us and understand this is what we are best at, we've driven that cost down hundreds and hundreds of basis points, to the point where, over the summer, we financed an unrated customer at SOFR plus 400. We're talking about 500 to 1,000, or 900, I guess, basis points of savings right there. And what we are continuing to see is that the depth and breadth of that market is incredibly robust if you understand how to, like I said, write those contracts, structure that debt so it's self-amortizing, and also build your backlog in a way that is mindful of things like creditworthiness. We talked about how north of 60% of our revenue backlog at the end of the third quarter was investment-grade.
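Putting the two quoted spreads side by side, a back-of-the-envelope reading (my arithmetic, not the speaker's):

$$(\text{SOFR} + 962\ \text{bps}) - (\text{SOFR} + 400\ \text{bps}) = 562\ \text{bps}$$

On, say, $1 billion of drawn debt, each 100 bps of spread is about $10 million a year of interest, so 562 bps is roughly $56 million a year saved. The larger figures quoted presumably benchmark the unrated deal against what an unrated credit would have priced at in that earlier market, which is not stated here.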
From a technology standpoint, what is CoreWeave's enduring advantage when, let's say, any of your large contracts come up for renewal, the world's no longer supply-demand constrained, and customers can pull that work into first-party data centers? Then you need to sink or swim based on how well you execute from a technology standpoint. Anybody can acquire data center space and cable together NVIDIA GPUs in a server rack. I'm massively oversimplifying. But what is your special sauce, Nitin?
So I've got to argue one thing before I answer the question, which is that I think building supercomputers is a whole lot more complicated than buying some racks, bolting them into the ground, and pushing the on button.
Yeah, I understand. I oversimplify.
But by the way, I think we have to continue to explain that to the world. And I do think we're at a point in time at which it might be more difficult for the world to differentiate the quality.
That's what I'm getting at. Relative to others.
Yes. So now, to dive into the meat of your question: we have purpose-built this cloud from the ground up to deliver maximal performance. That has been an advantage of ours that has allowed us to grow our footprint and acquire customers at a rapid rate. We are continuing to innovate. We are developing new products and services that fit within the GPU or AI cloud, that are not carried over from general-purpose compute but are purpose-built for these types of workloads and what customers will increasingly need in the future. Take something like AI object storage. That's a product within our broader storage business which, we announced in the third quarter, has grown to north of $100 million of ARR; it's grown like a weed.
That product was built in direct response to the advantaged position we're in, which is that we are deeply entrenched with our customers on a technical level. We understand what their pain points are, and when we see something that is missing, we either go out and build it or buy it. So take storage. We used to live in a world where you were a single-cloud customer; you were an AWS customer. You are now increasingly an AWS customer and an Azure customer and a CoreWeave customer. And storage was not built for that world. You had data lock-in. You had high latency if you wanted to move data to another cloud. You had high egress fees and transaction fees. So what did we do? We built a product that's low latency with no egress or transaction fees.
Customer adoption and attach have been very attractive as we've gotten started bringing that product to market. We are going to keep building those types of products and services, such that you look five years from now and say, wow, there's a corollary to what the hyperscalers did with the CPU cloud, but they did it specifically for this technology and this world.
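For a sense of the access pattern being described, here is a hedged sketch assuming an S3-compatible interface, which is typical for modern object storage products; the endpoint URL, bucket, and credentials are hypothetical placeholders, not CoreWeave's actual API.

```python
# Sketch of multi-cloud object storage access via an S3-compatible API.
# Endpoint, bucket, and credentials below are hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://object.example-ai-cloud.com",  # hypothetical
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# The same client code runs unchanged against any S3-compatible store,
# which is what keeps data portable across clouds; per the discussion
# above, the differentiators are economic (no egress or transaction
# fees) and latency, not the API surface.
s3.upload_file("checkpoint.pt", "training-data", "checkpoints/checkpoint.pt")
obj = s3.get_object(Bucket="training-data", Key="checkpoints/checkpoint.pt")
print(obj["ContentLength"], "bytes")
```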
Got it. Tim, you want to take us home?
Sure. So there's also this debate, I would say, that's come up lately about alternatives to GPUs. And you build whatever the customer wants you to build. So can you speak to that? Do you have any demand for, number one, AMD GPUs? And most importantly, do you have demand for any ASICs, such as TPUs? Do you have customers coming to you and saying, hey, I want to do development on TPUs, so you go out and add capacity on TPUs?
So, as you pointed out, we're customer-led in everything we do, whether it's entering a new geography or scaling out a different type of accelerator. I would say demand for us continues to be overwhelmingly for NVIDIA technology. If there comes a point in time at which we start to hear something different from our customers in a scaled way, not just a phone call here or there asking what we think of this, then that may change our behavior. But for now, all the signals we get come down to: we need more NVIDIA GPUs, please.
Do you think that there's any... As for the inbound calls to you, I get the sense that there are some more inbound calls where people want to talk about, well, what if we wanted AMD, or we wanted an ASIC?
Oh, yeah. No.
Are calls coming more frequently on that?
Not in any notable way. But I would say we built this technology stack from the ground up to be fungible across silicon. We were prepared for a world in which our customers want different things, and we want to give our customers what they want; that embodies what we do. That doesn't mean we don't explore and make sure that we are prepared to work with other types of accelerators. But again, for now, the trend has been the trend, which is: NVIDIA GPUs, please.
Great. I think that's all the time we've got. Down to one second. We squeezed every bit out of it we could. Nitin, thank you for coming. Having CoreWeave here, I think, makes this conference phenomenal in terms of the GPU tracks that we've got here. So appreciate your attendance.
Thanks for having me.