Great job with the lights. Thanks, everyone, for joining, and I appreciate everyone making time. This is the start of the ninth annual Wells Fargo TMT Summit. Looks like we're gonna have some sun. Thanks to those of you who made it in last night; the bit of rain looks like it's passed. Very much looking forward to the next couple of days of content across various tech sectors. I'm Michael Thorne with the software team here. With us to start this morning, we have CoreWeave's CFO, Nitin Agrawal. Before we get started, the company has asked that I read through a quick safe harbor statement, so bear with me for a moment. I would like to remind you that CoreWeave may make forward-looking statements during today's fireside chat. Actual results may vary materially from today's statements.
Information concerning risk, uncertainties, and other factors that could cause these results to differ are included in CoreWeave's SEC filings. With that, Nitin, there's a lot I think we'll attempt to cover in the next 30 or so minutes. Maybe for those who aren't as familiar in the room, could you just start with CoreWeave, the company, what you do, the problem you're solving, the types of customers, and then we'll get into some of the more detail-oriented questions from there.
Awesome. First of all, thank you so much for having me. Really appreciate being here. CoreWeave is the essential cloud for AI. We're solving a problem in AI infrastructure as it exists today: serving the largest and most demanding AI customers, who are developing workloads that require parallelized computing. That is what we specialize in. The cloud that was built two decades ago was built for a very different use case, what you'd call serialized workloads. CoreWeave is building purpose-built AI infrastructure, the most performant and scalable infrastructure for AI. That is the problem we are solving, and it's why customers have rewarded us the way they have.
We count large AI enterprises and hyperscalers, as well as large AI labs, as our customers. Some of the well-known large customers on our platform include OpenAI, Microsoft, Google, and Meta, along with smaller labs like Mistral, Cohere, and so on. From an infrastructure standpoint, we are at the frontier of all the AI development happening in the industry.
Yeah. That's a very good, comprehensive, well-practiced overview. You reported earnings last week.
That's correct.
And there's been a lot of back and forth around AI and sentiment in the market. Maybe you can level-set with the key points and takeaways for investors coming out of Q3.
Yeah. A couple of things. From a demand perspective, we've talked a little bit about this: demand continues unabated for us. The demand cycle we are seeing from our customers is stronger than ever before, and that's validated by what we've been able to deliver on our platform. We nearly doubled our revenue backlog in the last quarter alone, up to $55.6 billion, adding over $25 billion of revenue backlog in Q3. That's a strong testament to the customer demand that we currently see in the market. In addition, we've continued to execute on delivery amid everything going on in the market. We delivered $1.4 billion of revenue and about a 16% adjusted operating income margin, and we continue to scale our platform tremendously.
We added about 600 MW to our contracted power portfolio, which is now up to 2.9 GW. We have more than a gigawatt of capacity available to be sold to customers, which is gonna come online in the next 12 to 24 months. All in all, we see great momentum in demand, and great potential for us to continue to fulfill customer demand, add to our revenue backlog, and deliver results for the company.
Yeah. One of the questions that came up coming out of the call was tied to what sounds like a very specific customer environment and some delays in that environment. Can you help frame what's happening there? It sounds like you're confident that while that capacity was initially expected in Q4, it will still be up and running in Q1. What visibility do you have into that ramp, and are there metrics, whether it's construction in progress or some of the other disclosures you provide, that investors can track to hold you to that timeline?
Absolutely. Before we get into the details there, one thing I do wanna acknowledge is that the scale of what we are building is unprecedented. Last quarter alone, we added about 120 MW of active power capacity, taking our total to about 590 MW. The scale of what's being built for AI infrastructure is massive and unprecedented, which has put a lot of supply constraints and challenges on the overall industry. We've talked about this a few times: we do expect the supply-constrained environment to continue for an extended period of time. The way this has impacted us is in the form of a single data center provider that is delayed on one of their data center deliveries by a few weeks, due to construction-related and weather-related delays that this provider ran into.
When we learned about it in Q3, we responded by putting boots on the ground to help that provider get across the finish line. We now feel very confident in our ability to deliver with this partner. What you will see from this delay is that most of the CapEx we were forecasting to deploy in Q4 is going to be deployed by the end of Q1. As you saw, we added about $2.8 billion in construction in progress in the last quarter, and we'll add some more this quarter as we build up the infrastructure to be deployed. We feel very comfortable and confident that most of this delay is gonna be caught up by the end of Q1.
Does that at all impact the cadence at which you're able to bring on new customers? Because I think there are now questions that we're getting just around if capacity bottlenecks show up, does that kind of slow the market down in any way?
Absolutely. From a demand perspective, demand continuously outstrips the supply that we can bring to the market. Our customers want more and larger capacity chunks from us, faster, and we're responding to that. In this particular case, the incident was associated with a single provider. CoreWeave has, time and again, proven itself to be first to market with the fastest and most performant infrastructure for these kinds of workloads. We were the first to market with H100s at scale. We were the first to market with H200s, GB200s, and now GB300s. Over and over again, we've delivered for our customers. This was a particular incident with a particular provider. However, we do continue to be in a capacity-constrained environment, and we're deploying capacity as fast as we can.
As I said, we added 600 MW to our secured power, reaching 2.9 GW last quarter, and we ended Q3 with approximately 590 MW of active power. By the end of the year, we're gonna be at greater than 850 MW of active power. We are bringing capacity online for our customers as fast as we can.
Can you speak to what you see as the bottlenecks in terms of what you're delivering? You've mentioned power a couple of times. I think there are varying opinions on this as well. Is it GPU supply? Is it power capacity? Where are the various potential bottlenecks, and what does CoreWeave do to stay ahead of those where you can?
Absolutely. The biggest supply constraint that we carry today is powered shell capacity, which means putting together the Legos for a functional data center, one that is energized and can deliver power to a rack. Once that powered shell capacity is delivered to us, we are the fastest and the best in the market at delivering that infrastructure to our end customers, and it is the most performant infrastructure out there. Just recently, SemiAnalysis rated us as the only platinum cloud provider in the AI infrastructure bucket, twice in a row. They looked extensively across the entire hyperscaler space as well as the NeoCloud space to figure out where the technology differentiation is, and we stood out yet again as the only platinum in that category.
We feel incredibly proud of that accomplishment and of what our software platform has achieved, and we continue to deliver that for customers. In terms of constraints, the single biggest one we continue to see in the market is powered shell capacity.
Yes.
What we have done to help around that is diversify our portfolio of data center providers. We talked a little bit about this in our earnings last week: today, no data center provider represents more than approximately 20% of our capacity from a data center footprint perspective. In addition, we have engaged in self-builds where technology differentiation is involved and requires us to be hands-on. We have announced two of our self-builds, one in Kenilworth and one in Lancaster, and we are building that infrastructure muscle for ourselves so we can help our providers where needed, which is a great example of what we are doing right now with the data center provider that is delayed.
Is that self-building a shift at all? Is it a pull-forward of where your strategy was eventually going to go? Or maybe just speak to why that's important for CoreWeave.
Yeah. Look, from a mix perspective, we'll have a healthy mix of leasing data centers from third-party providers and some self-builds. It's important for us to have that capability in-house, because we know where the technology is going. We identified that liquid cooling was going to be the future of the data center long before the GB-series chips were in consideration, and the majority of our portfolio that's coming online is liquid-cooled. It's important for us to continue to be ahead of that curve, because we are the largest deployer of this capacity at scale for our customers.
We are able to see and work with the technologists in those companies, which are these leading AI innovators, to understand what the problems are and where the industry is heading to be able to solve those problems before they actually become problems. It is important for us to have a healthy mix of that and to build certain of those skill sets as we shape where the future of this industry is headed.
Yeah. I wanna just give you a chance to go on the record with your current views on some of the key debates that are happening across the AI world and AI infrastructure specifically. Because these are probably questions you've gotten often, right? I think your perspective is incredibly valuable for those in the room or those who are maybe tuning in. The first question is always just, how do we know we're not overbuilding, right? You see all these headlines suggesting there's a bubble forming around AI-related infrastructure. You've hit on demand and pipeline in a few different ways. You've talked about the types of customers you have and the pedigree of those customers, I think, is generally well-known.
What's your overall response to what you're seeing, and how do you know that overbuilding isn't happening in real time right now?
No, absolutely. Look, I appreciate and I understand the concerns that are floating around. The numbers being talked about are gigantic, and it does warrant taking a pause to understand what's happening behind those numbers. From the vantage point of where I sit, it's important for me to separate the noise from what's actually happening at the end-customer level. If you look at our customer profile, the use cases and the revenue associated with them are real, which gives you confidence in the demand that's flowing through. One of the key things: we heard your feedback loud and clear around customer concentration when we IPO'd earlier this year. We announced in our Q3 earnings that no customer today represents greater than 35% of our revenue backlog.
Right.
Approximately 35%, which is materially down from last quarter, when it was 50%, and significantly down from earlier this year, when it was 85%. In addition, greater than 60% of our revenue backlog is associated with investment-grade customers. So we are being very thoughtful in how we build the Legos and the foundation of this business on the back of highly creditworthy customers and a diversified base. The use cases that our customers are deploying and monetizing are really encouraging. When you look at the growth of some of the AI labs and the revenue behind those labs, that's real. When you look at the use cases popping up on the enterprise side, whether at large mega-cap companies like Google or Meta, those are real ROI use cases being deployed in production environments.
What we are also seeing is a proliferation of AI use cases across industry verticals, which is very encouraging for us because it represents monetization of the AI infrastructure that people have deployed. These are still early offshoots, but it's highly encouraging to see that enterprise adoption is accelerating. All in all, what you're gonna see over the next few years is buildout at an unprecedented scale, because of the magnitude of the change and transformation happening: every single industry, and every single workflow in those industries, is being disrupted by AI.
Yeah. Can you just speak to what you're seeing in terms of customer expansion? You're talking about large customers, and I think on the last earnings call you even mentioned a hyperscaler has now amended or extended their contract six times.
That's correct.
With CoreWeave. What drives that type of expansion on a fairly abbreviated cadence? Maybe you could jump off into technology differentiation and the reasons why it's CoreWeave specifically seeing that type of deal cadence with some of these customer types.
100%. One incremental thing we talked about on the last earnings call was that nine out of our ten largest customers by revenue backlog have signed multiple contracts with us, the only exception being a customer that we onboarded in Q3. Give it a couple of quarters and we'll make it ten out of ten. The reason that happens is customers join our platform and discover that the performance of the same GPU is fundamentally different on the CoreWeave platform than what they can experience elsewhere. The key differentiator is our software stack, from a custom build for what AI workloads need, to now going up-stack in what we are developing. We also announced that, while relatively small, our storage business crossed $100 million in ARR in Q3.
That's a true testament to CoreWeave developing into a full-stack platform for AI developers, in addition to being a compute platform. We're very proud of the technological innovations we've made to build ourselves into a full-stack platform organically. Storage is a great example: there wasn't an object storage solution available for AI workloads, so CoreWeave responded with CoreWeave AI Object Storage, which we announced, and storage as a portfolio has achieved $100 million in ARR for us. We've also grown inorganically through our acquisitions: Weights & Biases, Monolith, Marimo, OpenPipe. We're incrementally adding capabilities that allow us to build a holistic platform.
For instance, with OpenPipe we launched our serverless RL platform, which allows customers to fine-tune on our platform. With Marimo, we are entering what you would call the developer community, where developers can go from experimentation to deployment. With Monolith, we're entering industrial use cases, with Stellantis and Nissan as pioneer customers on that platform, where you're actually seeing AI deployments happen for industrial use cases. Customers come to our platform, they see the differentiation in running the most demanding workloads, and they continue to expand with us. This is a pattern and a behavior we've seen time and again with our end customers. That's the strength of the CoreWeave platform.
I'm curious whether you've seen a change, even just from the IPO until now, in market awareness around some of the things you're highlighting: the referenceability of some of your customers, the additions to the software platform, broader market recognition, some of the studies you've mentioned. Do you find those are catching on a bit more? How would you compare market awareness of CoreWeave at the start of the year, around the IPO, versus where it sits today?
Absolutely. If you look at the broader industry, pricing today is mostly quoted in dollars per GPU-hour. When you have a pricing metric like that, it does not differentiate between one platform and another. Once you experience the platform, you start recognizing that an H100 on the CoreWeave platform is fundamentally different from an H100 on some other platform. We talked a little bit about this on our last earnings call as well. One of the first large H100 contracts that we wrote was coming up for renewal in a couple of quarters. The customer proactively reached out to us and recontracted that H100 cluster, a 10,000-plus GPU cluster, at an ASP within 5% of the original deal price we had with that customer.
That's a testament to the value customers find in that infrastructure on our platform, and a differentiation the market has increasingly started to recognize. A few things have happened for us since the IPO: more and more customers are now repeat customers; more and more customers recognize the value of the GPUs on our platform relative to other platforms; and customers now recognize us as a full-stack AI platform rather than just a compute infrastructure platform. All three vectors have changed the perception and the scale of how customers engage with us.
I wanna give you a chance to expound on that last point, because it hits on another key investor debate, which is pricing dynamics in the market and overall depreciation cycles, right? You see this continued question around whether six years is the right number. I thought the stat you gave on the call, the ASP holding at 95% after roughly three years, was very useful. What's your perspective on depreciation cycles, what you're seeing, and how reflective you think that large H100 cluster contract is of the overall market right now?
Yeah. Last quarter, we wrote contracts for our prior-generation SKUs in addition to the Hoppers, whether it's the Amperes, the L40s, and so on. One thing that has changed in the industry, and a belief we've held very strongly from day one, is that the infrastructure we are building is AI infrastructure. It's not differentiated for training versus inference, because customers continue to use it in a fungible manner. The second piece is that most customers do not require the latest and greatest technology for all of their use cases. A particular example: earlier, when you put a query into, let's say, ChatGPT, you had to choose which model you were going to use for that query.
Now, based on the query presented to ChatGPT, it chooses the model it's going to deploy, and not all models run on the same infrastructure. That's the beauty of how use cases are evolving in the industry. What we are also seeing is that many AI developers and researchers, as they think about platform development, are trying to use and deploy technologies they are used to. They're not going to the Blackwells. They're saying the learning curve to go through a Blackwell is just not worth it until its reasonable viability has been achieved.
Their view is: if we need more performant infrastructure, we'll go to the Blackwell, but we'd rather have our researchers work with the Hoppers they are familiar with and know how to work around. The third thing we are seeing in the market is increasingly larger contracts for longer durations. What used to be conversations with customers late last year and early this year about three- to four-year contracts is now expanding to five- to six-year contracts as a norm. With one large customer, we signed a five-year contract with two one-year extension options baked into it. What we are seeing is customers looking for more and longer exposure to these chips, not shorter.
We continue to see very robust demand on our platform, not just for the latest and greatest generation of infrastructure but also for prior generations, which remain very performant on our platform.
Yeah. And the fact that contracts are extending to six years seems well mapped to depreciation cycles, at the very least. Another key question we field often is around your financing strategy. This is a fast-moving market, and CoreWeave is competing with some of the largest, most scaled players in the world. There is a debt component to your ability to provision these environments. Can you speak to why that's the right strategy from the CoreWeave perspective?
Absolutely.
What gives you comfort in taking that on upfront, what are you seeing from a cost-of-capital and interest rate perspective as you evolve, and how should we expect this to progress?
Absolutely. When you think about our financing strategy, its backbone is our long-term take-or-pay customer contracts. The way we structure our debt is fundamentally around what you would call success-based CapEx. GPU CapEx, which is the large fraction of the CapEx we deploy, is fundamentally success-based: we purchase and deploy GPUs only when we have a signed, long-term, committed customer contract behind them. This allows us to finance those GPUs in a secured facility and to amortize and pay the interest on that debt within the four walls of the customer contract, with excess cash flows from those contracts being kicked back up to the parent.
This is a very risk-managed way for us to deploy and scale our CapEx in an equity-investor-friendly manner, where the excess returns are passed on to the equity investors while the debt remains structured and backed by these long-term take-or-pay customer contracts as the first line of defense, and by the GPUs themselves as the second line of defense. With this structure, we have been able not only to scale the business to where it is today and to continue scaling it over the next couple of years, but also to significantly reduce our cost of capital.
In Q3, we amended our DDTL 2.0 delayed-draw term loan facility to add a $400 million tranche, creating an overall $3 billion tranche at SOFR plus 425, I believe is the number, which was significantly lower than the original DDTL 2.0 facility. We also completed our DDTL 3.0 facility, which was done at SOFR plus 400, a roughly 900 basis point decline versus a similar non-rated structure in the prior facility. What you're continuously seeing us do is compress that spread on our cost of capital as the market becomes more comfortable with CoreWeave's infrastructure product, our ability to deliver, and the quality of the customer contracts against which we deliver.
Does the structure, and just the sheer scale, of some of the contracts you're signing concentrate the types of customers you're able to go after? Are there ways you're looking to diversify? The reason I ask is that you brought on a Chief Revenue Officer, we saw the NVIDIA deal and have talked a little bit about that, and you talk about the AI labs as customers alongside the OpenAIs, Microsofts, and Metas of the world. How do you diversify, and how near- or long-term a strategic focus is that for CoreWeave?
Absolutely. Diversifying our customer base is a high strategic focus for CoreWeave. By being a public company with access to public markets, and by continuing to build scale on our platform, we're able to support use cases that we couldn't earlier, while we were still building the platform. In addition, we've talked about the NVIDIA deal, which is a fundamental construct that lets us bring onto our platform companies and research labs that are not yet in a position to make long-term commitments to this infrastructure.
It is a very responsible way for us to make this infrastructure available to potentially large future customers who today are not in a position to buy or commit to long-term committed customer contracts. The NVIDIA contract that we announced last quarter is fully backed by NVIDIA. It is a take-or-pay customer contract with NVIDIA, with one differentiation: it allows us to interrupt that contract and take any amount of capacity for any duration for such small-customer use cases. That allows us to expand into markets we previously couldn't.
These kinds of structures, combined with our ability, through our own cash flows and our own credit rating, to more independently support these customer use cases, are how we are looking to diversify.
Yeah. That's super interesting. I think we can get a couple more in before we run out of time. Training versus inference: in some conversations we have, the market seems to view these as discrete, right? Use a cluster for training, then move it to inference. You've mentioned that you see customers doing a mix of both. I'm curious whether you view those workload purposes as discrete, or whether there's an advantage in powering the world's largest training clusters and then bringing inference-related workloads to those same customers, and how you're thinking about the evolution of the training-versus-inference mix as the market matures.
Absolutely. A large fraction of the workloads that run on our platform are inference workloads, based on what we can infer from the power draw statistics of the clusters. Oftentimes we see customers fungibly move their workloads on our platform, where certain clusters at a certain point in time are used for training and later are used for inference. More and more of our customer base, as their use cases evolve and as inference becomes multi-node, is seeing that the differentiation between training and inference workloads is collapsing. More and more of our customers are asking for infrastructure that looks very similar for training as well as for inference.
The beauty of our software stack is that it allows customers to seamlessly and fungibly move between training and inference workloads on the same cluster, back and forth. The power of our platform allows customers to do that, and we see more and more of our customer base doing it on a very regular basis. From our perspective, we're not building training infrastructure, and we're not building inference infrastructure. We are building AI infrastructure that serves both purposes equally for our end customers, and that's something they truly appreciate about our platform.
Okay. That's great. I've heard the phrase "fungible fleet" a lot recently; it seems like there's some commonality there. We just have a couple minutes left, Nitin, and I wanna cede the floor to you. We heard the earnings results Monday of last week, we've seen the stock performance lately, and there are a lot of questions. We appreciate you being here; you'll be fielding investor questions throughout the day. What is the market missing? What should be the key takeaways coming out of CoreWeave's most recent earnings report? And when we're here in three years talking about CoreWeave, what will that conversation be focused on?
Yeah. Let me take the second part of the question first. Three years out, what we'll be talking about is the proliferation of use cases, not just in a particular industry or for a particular use case, but broad-based use cases across multiple industries, with CoreWeave's software layer enabling all of them to participate and accelerating those developments. That's the platform we are building. We started from compute, like every cloud does, and now we are becoming a full platform for all of these use cases, both organically and inorganically.
If you're talking three years down the road, the conversation will be about how the CoreWeave platform and the entire stack are enabling these use cases to be accelerated, and customers to generate value out of their AI workloads at massive scale. The infrastructure conversations are going to look very different three years out, because it's going to be more of a platform and use-case-acceleration conversation. Broadly, in terms of the market, look, the scale of what's happening is unprecedented. The world has probably not seen such a large technology innovation since the advent of electricity. Folks are readjusting to market information that comes in real time and is oftentimes conflicting, and the scale of what's coming at them is challenging.
There are going to be periods where the market swings in one direction versus the other. Our job is to keep our heads down, service our end customers, and look at customer demand and fulfill it in the most business-responsible manner with the most innovative technology solutions. That is what we do best, and that is what we are going to continue to focus on.
Yeah. That's a great note to close on, Nitin. That was great. Appreciate you joining. Have a good couple of days here at the conference. Thanks very much.
Thank you so much for having me.