Welcome back to AI Week. We are excited to have CoreWeave with us today: Nitin, the CFO, and Deborah in Investor Relations. For those of you who do not know the team, Deborah spent over 10 years at Meta running IR, then went to Fanatics, and joined CoreWeave prior to the IPO, leading it through a very successful IPO. Thank you, Deborah, for setting this up. Nitin served as VP of Finance at Google Cloud and held other CFO and financial roles at numerous companies, including Amazon, and spent a lot of time at Microsoft. Thanks, Nitin and Deborah, for your time today. Maybe just to start off, Nitin, with what you and I were talking about before we went on: this morning's announcement of the new transaction. Maybe we could start there and get your thoughts on that win.
Congrats on that.
Thank you so much, Brent, and thank you so much for having us today. I look forward to continuing this dialogue. Look, we're incredibly excited. We've made a couple of announcements mid-quarter: the NVIDIA deal, and then the deal earlier this morning with OpenAI. We continue to solidify our position as the leading provider of AI infrastructure to the most demanding and most critical players in the AI landscape. This deal is also a testament to that and continues to solidify our position there. OpenAI is a great partner for us. As we described in the press release this morning, our total commitments with them are north of $22 billion today and expanding. We look forward to continuing to work with them as a great customer and partner throughout this journey.
That's great. When you think back to the IPO, it felt like many were misunderstanding the capacity needs. What do you think investors are still misunderstanding about CoreWeave? What have they gained clarity on?
Yeah, that's a great question, Brent. When we went public during the early part of this year, the misconceptions, or the things that were not very well understood in the market, were fundamentally around a few things. First was this demand tidal wave, as you would call it, and how sustained that demand curve is. Alongside that were the questions of how sustained your unit economics are and how sustained your customer relationships are. Since our IPO, I think all of them have gained material incremental clarity. If you look at the demand side, it's not just that the demand is real. You see from the most recent announcements from us, as well as from the broader industry, that demand is actually accelerating. It's a clear signal of the TAM of the market and how quickly that is expanding.
We talked a little bit about this in our earnings call last quarter, where we said that earlier the conversations we used to have with customers were about eventually getting them to tens of megawatts of scale in capacity requirements, which then changed into hundreds of megawatts. Now the conversations we are having with customers are about how to help them scale to gigawatt-plus scale. The other piece, around customer diversification, was one that was talked about a lot during our IPO process. You have a single customer, and that customer was making statements in the press about their needs and the fulfillment of their capacity requirements. Since then, we've made a concerted effort toward diversification of our customers.
No customer today represents more than 50% of our revenue backlog. We have continued to focus on diversifying our customer base. The other thing we are noticing is that beyond these mega-giant consumers of infrastructure capacity, new use cases are evolving, both on the enterprise side and among AI labs and smaller customers. We talked a little bit about this in our Q2 earnings: Moonvalley, developing frontier models for VFX; Hippocratic AI, advancing healthcare with AI-first, safety-first applications; British Telecom, showing adoption in telco. All of this momentum, while small, represents great offshoots of the broader adoption driving AI infrastructure demand.
Around the durability of our model, I think we've demonstrated through repeated execution that our model is not just durable. It is actually scalable over time as we continue to expand and write these large contracts with our customers. One thing that has become more evident over the last few quarters is that our customers used to ask for contract lengths of somewhere between three to five years. That ask is expanding, with customers now going for five to six years of committed contracts, and we're continuing to see that. In addition, we see very strong, healthy economics continue to be sustained for prior-generation infrastructure. As we think about the new generations, Blackwell and then Rubin going forward, we're still seeing Hoppers and Amperes hold their value pretty significantly.
Some of our material contracts earlier this year were on the Hopper series, and we continue to see that over time. Those are factors which I think are relatively well understood now versus at the time of our IPO.
That's great. When you think about the capacity constraints persisting, with demand outrunning supply, what are the biggest constraints in the market you're seeing today? Power, chips, et cetera? How would you characterize what you're seeing?
We've been very consistent in our messaging around what we see in the market, which is that demand continues to far outstrip supply. This is a structurally supply-constrained environment where no near-term solutions to those supply constraints are possible. The biggest constraint for us continues to be powered shell capacity, which is powered data centers. That is something we've been incredibly focused on, getting incremental capacity and bringing that capacity to bear as quickly as possible. As of the end of Q2, we had a contracted portfolio of 2.2 GW of power that is going to come online from now through sometime in 2027. Our active power footprint at the end of Q2 was 470 MW, and we've said that by the end of the year we are going to be north of 900 MW of active power.
In addition to working with multiple data center providers to continue to expand this portfolio, we recently announced our proposed acquisition of Core Scientific, which adds an incremental 1.3 GW of power to our portfolio and gives us expansion options on some of their sites. We continue to look for innovative and creative ways to expand our power footprint and accelerate it at the same time.
The unit economics are something both you and I have been hammered on by investors. For those who aren't as familiar, just help explain what's going on. I think, Nitin, when you start to see the $300 billion deal between OpenAI and Oracle, you see the magnitude of these transactions. We're all getting the question: how does this pay back? What gives you conviction that this is a good economic model over the long haul?
Yeah, absolutely. We have been very thoughtful and diligent in growing this business in a very risk-mitigated manner. Last quarter, greater than 98% of our revenue came from reserved-instance contracts, which are take-or-pay, long-term committed customer contracts. Our committed customer contracts are generally between three to five years, and their terms are now extending, with most customers looking for five- to six-year terms. Those infrastructure builds are purely success-based CapEx for us, which effectively means that we only invest in the CapEx associated with those deals once we have a committed long-term customer deal on the other side of it. Within the confines of the contract we have written with the customer, we are able to amortize and naturally de-lever the debt against those structures in that time period.
The combination of our success-based approach to CapEx, the fact that we are building the business on the back of long-term committed, non-cancelable, take-or-pay contracts, and the sustained unit economics we continue to maintain in the business makes us feel very comfortable about the longevity of the model. Our stated goals at the time of the IPO were both to increase our depth of capital pool by moving away from private credit to public debt markets and, as a result, to significantly improve our cost of capital. You saw both being delivered in the two high-yield debt offerings we had over the summer.
You also saw that in the closing of the secured contract financing we just secured with our DDTL3 facility, which, by the way, was done at an interest rate reduction of 900 basis points, at SOFR plus 400, for a non-investment-grade company. This gives us a dramatic increase in scope in terms of where we can get capital from, as well as materially reducing our cost of capital. All of this gives us flexibility in how we think about customer diversification and in broadening how we think about terms with our customer contracts. The four things we consider as we think about customer contracts are the length and duration of the contract, the upfront payment associated with it, the margin on the contract, and the cost structure, alongside the strategic nature of the deal.
Lowering our cost of capital continues to give us more power and ammunition to be flexible across those dimensions.
With improvements in your cost of capital, are you now in a better position to pursue downstream opportunities, or is that motion still early?
No, absolutely. Look, with the improvements we've seen, rather dramatic ones, in our cost of capital, we're definitely in a stronger position to continue to invest for growth. Now that we are public, we also have liquid currency in our equity, which gives us incremental flexibility in how we pursue opportunities. With that said, we continue to be customer-led in how we invest both up the stack and down the stack. You've seen that in our execution: the Weights & Biases acquisition as well as the OpenPipe acquisition, which expand our stack and deepen and accelerate customers' adoption of AI workloads. You also see it in our proposed acquisition of Core Scientific, where we saw an opportunity to do two things at the same time.
One, to get better operational control over the most constrained resource, which is powered shell capacity. We want tighter operational control over those assets, as well as an opportunity to save costs around them. We have talked about roughly $500 million of run-rate cost savings as part of this acquisition by the end of 2027. We continue to work our way through, using the tools now available to us as a public company, to be very thoughtful and diligent in how we expand our business.
Just on NVIDIA, you mentioned there is a very large commitment there from them. Can you just help everyone understand what they are doing? Any more details you can add would be great. If not, no big deal.
Yeah, sure. For those who are not familiar, we filed an 8-K, I think about two weeks ago, disclosing an update to our existing agreement with NVIDIA, stating that they signed a new order form with us worth up to $6.3 billion. Structurally, it looks pretty much the same as any long-term take-or-pay contract that we would finance GPUs against. The key distinction in this contract is the optionality, the call option on interruptibility, where we can redirect this capacity anytime during the six-year term anywhere else we see demand. There will come a point in the future when we and NVIDIA will likely explain this contract in more detail to the market.
At this point in time, what I'll leave it at is highlighting the flexibility that allows us to support startups and small and mid-sized companies with the same large-scale data center capacity that would typically require a four- to six-year contract commitment. Effectively, to summarize it in short: it gives us a call option on this infrastructure, so that we can sell to startups and smaller companies for shorter periods, or leave NVIDIA as the default customer for this infrastructure.
Just turning to international markets, this is not a U.S.-based push. It's also a global push. You committed, I believe, over GBP 1.5 billion to the U.K. What are you thinking about the broader opportunity here?
Yeah, look, our approach towards international markets is very similar to how we operate in the U.S., which is customer-led. We've been very methodical around our international expansion, tailoring it to where we see the most demand for AI compute and where, based on the customer signal, customers remain highly constrained. We began expanding in Europe in early 2024 and have added incremental capacity there since then based on the customer demand signal. This recent U.K. commitment is a clear example of us continuing down that strategic path, bringing our total investment in the country to about GBP 2.5 billion. The investment is designed to power the next wave of AI innovation by building facilities that prioritize sustainability and environmental responsibility. As part of this investment, we're also partnering with NVIDIA and DataVita in Scotland to deploy NVIDIA Blackwell GPUs.
As we think about planning for additional sovereign AI deployments with NVIDIA GB300 GPUs and NVIDIA RTX, we consider those opportunities as and when there is a true customer demand signal.
There have been an overwhelming number of questions back to us in just the concentration and what's happening with some of these transactions. You look at the size is getting bigger, the frequency is growing. I mean, I think it's hard for investors to digest kind of where we're at. I know you've said, hey, we're continuing to see demand outstrip supply. I think the question around just the sizing of these transactions seems to be concentrated among a few. Everyone asks, ok, where's the rest of the world? Where are the rest of the companies? Is everyone else going to jump in, or is this going to be run by five companies on the planet? How do you think about that?
Yeah. Look, the most recent deals, both from us and from other participants in the market, are a very clear indication of where the TAM for this market sits today. Microsoft and OpenAI are two very important partners and customers for us. We're honored to work with them, we've been delivering for their infrastructure needs, and we continue to be excited to partner with and deepen our engagement with them. With that said, we also see demand proliferation happening across the ecosystem.
Think about it: from other leading AI labs and hyperscalers to smaller enterprises and smaller companies. We talked a little bit about this in our Q2 earnings, where Mike highlighted offshoots of demand coming from companies like Hippocratic AI and Moonvalley, and enterprises like British Telecom, and so on. Today, while the volume of demand is concentrated in a few leading AI labs and enterprises, many of which we count as our customers and are excited to serve, our pipeline of potential customers is meaningfully larger than it was earlier in the year. This is a result of what we see as the productization of AI and inference, including agentic workloads.
There is a lot of demand at the smaller scale today, which will eventually, once this monetization starts happening, convert into long-term committed demand. Some of the structures we are putting in place, like the NVIDIA deal we just talked about, allow us to serve a much deeper universe of customers and provide that balance and continued growth in the ecosystem.
The question of ROI, Nitin, keeps coming up. What is the underlying conviction for the ROI? We are hosting 45 companies this week at AI Week. I think everyone that has spoken, from infrastructure to apps to security, has talked about if this actually works the way they think it works, the economic value to our planet is incredible. I think everyone is accurately pushing hard on, is the ROI going to be there? I guess maybe, is there anything that you have seen along the way that has given you conviction in this ROI path?
Yeah, absolutely. I think that's the right question to ask about any investment you make in any new technology: where does the ROI come from? We see it in both dimensions. We see it with our large-scale customers, who continue to see compute as a constraint on generating additional revenue. OpenAI talked about this earlier this week: more compute for them clearly means more revenue. We're also seeing it in the smaller customer set we just talked about, which is now evolving and developing, because that is truly inference-driven monetization and productization of AI, which effectively is driving that demand.
Today, from a scale perspective, that is relatively small compared to the gigantic pools of demand that exist with the hyperscaler community as well as the large AI labs, but those use cases are really promising. They are delivering value to end markets and customers, and we see very fast proliferation in the number of those use cases popping up in our pipeline. We remain incredibly positive and bullish about the scale of the opportunity, and we continue to invest both in our platform and in our customer relationships around it.
Just from a perspective on power, there's been a lot of questions. Training these models is projected to require data centers with 5 GW of capacity by 2028, with your contracted power last quarter being less than half of that. How should we think about the ramp up and build out?
No, absolutely. Look, this is a structurally supply-constrained market with no near-term easy levers to alleviate that concern. As we've repeatedly said, our biggest constraint today is powered shell capacity, including transformers, power generation, and everything it takes to energize a data center. We had 2.2 GW of contracted power capacity at the end of Q2. As you've seen, we continuously add to our portfolio with a variety of providers, diversifying our footprint, including greenfield projects like the $6 billion investment we announced in Lancaster, PA, and a joint venture with Blue Owl in Kenilworth, N.J. In addition, the proposed acquisition of Core Scientific would add about 1.3 GW of gross power, with an added increment of a gigawatt or so for expansion, which again enhances our flexibility in taking on additional projects to meet accelerating customer demand.
We are expanding our footprint internationally as well, as you've seen from our expansion in Europe. As demand continues to intensify, you'll continue to see more projects from us as we attempt to keep pace with the demands of customers in this space. We do this in a very methodical, disciplined, risk-mitigated manner. That has been the core building block of how we've built CoreWeave up until now, and we will continue to execute on it.
There's a handful of questions we've gotten on this whole concept of useful life, and investors' concerns about six-year useful-life assumptions for GPUs when we've seen useful lives on a lot of other technologies come in a lot lower. What's your response to that?
Yeah. Look, we feel very comfortable with the six-year useful life, which is the industry standard today. From our perspective, as we said, we write long-term committed take-or-pay customer contracts, so the spot pricing variability for some of this infrastructure, which may vary with demand and supply, does not really impact our economics. As the older, first-contracted Amperes and Hoppers come offline and are recontracted, the pricing for those is holding pretty stable. The infrastructure build is not going to be one-size-fits-all, which is what we see in demand from our customers. Many of our customers continue to re-sign and find tremendous value in prior-generation GPUs. A great example is the GPT-5 launch.
Now the prompt itself decides which model to use, not just the biggest and most high-powered model, in which case many of the models serving simpler queries may actually be more fine-tuned and optimized to run on an Ampere or a Hopper. That is where we continue to see strong demand for inference workloads on prior-generation infrastructure. The other trend we are noticing in the industry is that contract durations are expanding. What used to be three- to five-year long-term committed customer contracts is clearly moving toward longer durations, and we are now signing more five- and six-year contracts.
That is definitely something the industry is changing towards, as customers see the demand for this infrastructure and its scarcity extending way out into the future. We feel very comfortable in our ability to use these GPUs not just for six years, but perhaps even longer. We are not counting on that in our economic model today, but we feel very comfortable about life beyond that point.
The other investor questions we're getting are thematic. On some of these larger contracts, if the customers can't raise the funding and therefore cannot fulfill their commitments to you, what recourse do you have?
Yeah. Look, from a customer contract perspective, we base our deployments only on a success basis. Only when we have a long-term committed customer contract, and confidence in the customer's ability to fund that contract, do we go ahead and deploy. That is a critical portion of our financing strategy. We would not be able to finance those GPUs if there were contingency around the customer not being able to pay for them, which is why we've been very diligent and thoughtful about risk mitigation through long-term commitments, upfront payments with smaller customers, and so on. Those are bedrocks of how we have built this business, and we continue to build alongside our customers.
We are very thoughtful when we deploy capital that we have a clear line of sight to being able to finance that capital, as well as to customer recoverability from a contract perspective.
Yeah, we're not asking for your special magic wand of forecasting. Everyone says your job is hard: figuring out these long-term prices, the components, and everything that has to go into that. The question is, at a high level, how are you doing this? How does this work when you've got so many ingredients to account for? Let's assume one component jumps 50% in price. How are you ensuring that you've got enough flexibility to run this the way you want to run it?
Absolutely. For pieces that are outside our control, for example tariffs, most of our contracts have a tariff cost pass-through clause, where we are allowed to pass those costs through to our end customers. We write these long-term committed customer contracts at the point when we are deploying the capacity, so the cost vectors are fairly well known to us at that time, and they go in as input to how we think about contract pricing. Once the contract is done, the operating cost of that infrastructure on a fixed basis over time is fairly flat, similar to our revenue construct: revenue is ratable over the life of the contract.
As we are deploying the capacity, it gives us a very clear indication of what the economics on a sustained basis for any contract are going to look like over its life, which is why we can deploy this capacity with conviction and finance it with conviction with our lenders.
Nitin, on the software business, this is near and dear to my heart. It feels like over time you have a great opportunity to become a much larger software story, and there have been some acquisitions helping you get there. Can you walk through the long-term strategy with Weights & Biases? You mentioned the OpenPipe acquisition. Strategically, how are you thinking about the software business inside CoreWeave?
Absolutely. That remains a very critical vector for us to scale our business for the long term. We're building the AI platform here, and we want to be close to our customers to understand where in the software stack they see value from an AI developer standpoint. Our strategy remains both organic and inorganic in building that portfolio. Organically, we work very closely with the engineering teams at the labs and enterprises deploying this capacity at scale, where we get to learn the key pain points these customers have and build our portfolio and platform to serve them. Inorganically, Weights & Biases is a great example of how we've brought teams together.
Since the integration of Weights & Biases, we've launched three new products, which we announced on our last earnings call as well. We integrated Weights & Biases into CoreWeave Mission Control for observability. The Weights & Biases inference product gives customers control over how they're using compute, and that clarity is an incredible differentiator. Weave allows folks to optimize how their models' code uses the GPUs and drive performance. Not only is the stack developing, but we are learning an incredible amount from the 1,600 or so direct customers we brought on board from the Weights & Biases platform.
All of these customers are giving us valuable input, working with us on where their pain points are and how they want to see the AI software stack develop to accelerate and make their use of AI infrastructure more efficient. Brent, I'd love to continue the conversation, but I'm looking at the time, and we're a little bit over.
Yeah, we've got to let you go. Thanks, Nitin, so much for joining. Thanks, Deborah, for your support of Jefferies' AI Week. And thanks, everyone, for joining. Take care.
Thank you so much, Brent. Thank you for having us.