Good morning, everybody. I'm Brad Zelnick, Deutsche Bank Software Equity Research. Welcome to the 2025 Tech Conference. Really delighted to have everybody here in sunny Dana Point, California. On behalf of myself and my colleagues in research and investment banking, we're looking forward to a great next couple of days. Thanks, everybody, for coming. Given the relevance of AI across all sectors of tech, we thought it was fitting to kick off with today's conversation with CoreWeave. Welcome to Nitin Agrawal, CFO of the company. As for the format of this session, it'll be a fireside chat. I've got a bunch of questions that we're going to dive right into. Nitin, thank you so much for joining.
Thank you so much for having me here. I'm super excited to be here.
Great. Let's dive on in. Before we get started, I just want to remind everybody that CoreWeave may be making forward-looking statements during today's chat. Actual results may vary materially from today's statements. Information concerning risks, uncertainties, and other factors that could cause results to differ is included in CoreWeave's SEC filings, which I encourage you all to take a look at. Now we've got that very important part out of the way.
Absolutely.
Nitin, maybe to kick it off: CoreWeave has been a public company for less than six months and has already garnered a ton of investor interest. For those not as familiar, can you just give us a quick background on CoreWeave and why you're so well positioned for this moment, as demand for AI infrastructure seems insatiable?
Thanks so much, Brad. CoreWeave is the only purpose-built cloud for AI workloads. We're not a retrofit of anything legacy that was built for general-purpose workloads. AI workloads, which are parallelized, are very different from the serialized workloads for which the clouds of the 2010s were built. We are custom-built for AI workloads, which allows us to build the infrastructure layer, the networking layer, and the software layer purpose-built for AI. That has allowed us to continuously scale with the demand for AI. In addition, building this infrastructure at the scale at which we're building requires a very sophisticated financing approach.
CoreWeave has pioneered some of those vehicles and has scaled them, from asset-backed facilities onward, to allow the company to build this infrastructure at the pace the AI workloads require. CoreWeave is well positioned, both from a technology-execution standpoint and from a financing-strategy standpoint, to take advantage of the hypergrowth happening in the AI space at this moment. We're really excited to participate there.
It's an exciting time for sure. You guys are living on the bleeding edge. As people get comfortable with CoreWeave and its ability to deliver on that bleeding edge, the number one question that we get from investors is around competitive differentiation and the sustainability of your leadership looking out over multiple years. I just wanted to give you the opportunity to comment and frame CoreWeave's durable advantage for folks.
Absolutely. I think the single biggest point that stands out is execution, from the technology platform to our scaling engine to our ability to finance. When you look at the partners we've gathered, we've earned the trust of the AI pioneers and NVIDIA alike: we were the first to deploy the Hopper technology, both the H100 and the H200, at scale, and the first to deploy the Blackwell technology, the GB200, at scale. This is respect we've earned in the industry through relentless execution, both from a technology perspective and from an operational-scaling perspective. Look at it from an external perspective: independent research firms have put us in the platinum bucket as the sole provider there, in an evaluation that included all the hyperscalers as well as all the neoclouds that exist today.
You look at our customer list, it includes some of the most demanding deployers and consumers of AI infrastructure today, from pioneers in the AI labs to AI hyperscalers that consume capacity on our platform. In the last eight weeks or so, we've signed expansion contracts with both of the hyperscaler customers that we have. That gives you validation that this advantage continues not only to stand but actually to expand over time, as the technology to deploy this infrastructure gets more and more complicated.
Thank you. We continue to see these very strong proof points. In a similar vein, I think there's a good amount of debate about CoreWeave's positioning for training versus inferencing. I don't know that's necessarily the right way to think about the world. How would you compare and contrast your strategy versus the largest hyperscalers? It does feel like there are some key differences just in terms of what you're building and the customers that you're targeting.
Absolutely. I think the fundamental difference here is, look, we're not building training or inference infrastructure. We're building AI infrastructure. Our infrastructure is designed in a manner which works optimally for both training as well as for inference. Our software stack allows customers to use those interchangeably and fungibly during their lifecycle as they deem fit. The key important piece here is to recognize that we're not building individual infrastructure sets for training versus inference. What we are building is AI infrastructure sets, which are very useful for our customers.
What we are seeing from a demand perspective today is consumption not only from the original AI pioneers, the AI labs, but also from AI enterprises that are adopting AI at scale, from IBM to financial institutions like Morgan Stanley, Goldman Sachs, and Jane Street, to BT Group, to new use cases in the industry from Hippocratic AI in the medical space to Moonvalley in the VFX space. We're seeing new use cases proliferating now on the inference side. Inference, as all of us get excited about it, is the monetization of AI. We're seeing those use cases develop, and our platform is very well suited to serve both training and inference workloads through that common platform.
I think it's an important point. Nitin, hand in hand with the strategy you just outlined and the value proposition's broad applicability across different customer types are contract and funding structures that are relatively unique in the industry. Can you remind folks of the mechanics of this and how it aligns with the business model?
Absolutely. One of the core principles CoreWeave is scaling on is having revenue visibility and building our financing structure on long-term committed customer contracts. Last quarter, 98% of our revenue came from committed long-term customer contracts. When we secure infrastructure, it is on the back of a committed customer contract, many of them with high-credit-quality customers, which allows us to fund that infrastructure using asset-backed structures, continuously reducing our cost of capital and reducing the risk associated with the infrastructure. These debt structures are naturally amortizing and self-de-leveraging within the constructs of the contract. That allows us not only to pay the interest and the principal on these debt structures, but also to return free cash flow to the parent during the contract period.
This allows us to responsibly scale the infrastructure that is needed for the most compelling and most demanding AI workloads. We're doing it in a very thoughtful manner while we continue to scale at an unprecedented rate.
CoreWeave has no doubt been a pioneer in this regard, and it's something that we watch very closely. Another topic that comes up quite frequently with investors is unit economics, and we're fortunate to have you, the CFO of the company, here to dig in a little bit. I've got a couple of different questions along those lines. I think people appreciate the near-term margin headwinds at the rate you're scaling, but as I'm sure you can appreciate, it's hard to get a clean view on profitability from the outside. Where would you point investors to look to get more comfortable with the return profile? In your S-1, you talked about a 2.5-year payback period on adjusted EBITDA. Is that still the right way to think about new backlog that you're adding today?
Yeah, so directionally, on an aggregate basis, that continues to be the right way to think about it. That's how we economically price our contracts overall, within the tolerance limits of the size, scale, and length of the term of the contracts. As we continue to scale the business, and you mentioned the near-term impact of the hyperscaling we're going through, the unit economics remain fundamentally strong in those constructs where we sign long-term committed customer contracts of anywhere from three to five years. Those are take-or-pay contracts, which are non-cancellable, and that allows the company to scale in a risk-mitigated, responsible manner while preserving the unit economics of the business.
Nitin, as we think about unit economics and just the pricing of these contracts and you price rationally, if we then think about competitive situations where you are head-to-head for a large slug of business, how should we then think about where price lands in the stack rank of decision criteria versus the ability to deliver and capability that you offer?
Yeah, so look, every customer tends to be price sensitive, but a lot of our customers recognize the importance of a scalable, reliable, performant infrastructure. Those are parameters on which we continue to win across the board whenever we enter a tech evaluation with a customer: they look at the scalability of our platform, the resilience of our platform, and the performance of our platform, and it stands out. We talked about the third-party research; we are the only platinum provider in that category when it comes to GPU AI infrastructure, in a comparison that included the hyperscalers and the neoclouds that exist today. That is for a reason.
Our customer list, as we talked about, includes some of the most demanding AI customers, and they've chosen not just to build with us but to continuously expand. We signed the OpenAI contract earlier this year and expanded it shortly after; we signed a new hyperscaler onto the platform earlier this year, and shortly thereafter we expanded that contract as well. This is a testament to how customers evaluate our platform against what's available in the market.
Very helpful. Nitin, an opportunity that you've talked about relative to EBIT margins is verticalization, both up and down the stack. What leverage points does greenfield data center development give you to enhance profitability?
Absolutely. You've seen some of our more recent announcements on greenfield projects. We're doing one in Lancaster, Pennsylvania, a $6 billion investment, and you've heard about our joint venture with Blue Owl in Kenilworth, New Jersey. Where control helps us is in making sure we have operational control of this critical infrastructure as we continue to scale, while introducing not just our best-in-class technology but also the associated cost savings into our platform. At the same time, we want to be very thoughtful about where we invest our capital for the maximum return for our shareholders. You'll continue to see us in structures where we take operational control but find partners on the capital side that allow us to scale this infrastructure in a responsible manner for our shareholders.
Got it. That makes sense. Maybe as we think about the stack in the other direction, Weights & Biases was a great step in building up your capabilities. What are the biggest opportunities that you foresee up the stack? What level of confidence do you have that your largest customers will adopt these if you ramp investment here?
Yeah, Weights & Biases was an absolutely phenomenal acquisition for us. We are thrilled to have the team join CoreWeave, and over the last few months that the team has been a part of it, the integration has been absolutely phenomenal. At Fully Connected, the Weights & Biases developer conference, we launched three new products. These included the Mission Control integration in the Weights & Biases stack for CoreWeave, which gives their customers visibility into what's happening in their infrastructure. We introduced the Weights & Biases Inference product, which allows customers to better manage their inference workloads across the board on the CoreWeave platform. And we introduced Weave, which allows customers to integrate their GPU workloads across the board as well. All of these products are seeing great traction in the market, and we continue to build on that.
We've looked at this many different ways. The biggest strategy that's working for CoreWeave is land and expand, where customers try the CoreWeave platform, start with a small one-off deployment, and see rapid expansion after that. We've talked about a couple of examples of large customers that have expanded, but we also see it in the long tail of our customers. We acquired about 1,600 direct customer relationships, the Weights & Biases customers, onto our platform when we completed that acquisition. We're really excited to work with that set of customers, which includes leading pioneers in the AI space, to help them integrate onto our platform and continue to expand. Overall, the acquisition has been a tremendous portfolio add for CoreWeave, and we continue to build on that platform.
Got it. I know we're bouncing around a little bit, but just picking on the topic of unit economics, the debate oftentimes comes back to GPU useful life. It's been very encouraging to hear about being able to recontract hoppers on term for inferencing. Can you just share more how you think about recontracting versus spot to capture the most residual value and economic life relative to the lifespan of a GPU, just given the pace of innovation that we're seeing in the semis world?
Absolutely. This is one we pay a lot of attention to; we closely monitor where the market is heading and continuously adapt to it. A few things there. We feel incredibly comfortable with the industry standard of six-year depreciation for the GPUs. What makes us comfortable is the long-term structure of our contracts and our ability to continuously recontract the GPUs for different use cases as they come off their first contract. We've seen that with Ampere, and we're seeing it with Hopper as some of the early contracts roll off. We are seeing it across the board in the use cases that are developing. From a customer workload perspective, it's not a one-size-fits-all approach.
This is becoming increasingly prominent with, for example, GPT-5, where the application itself decides which model best suits the user's query and routes the query to that model, versus a single unified model. Those models run on all different kinds of infrastructure, from Ampere to Hopper to the Blackwell generation. This breadth of use cases and models running inference has allowed us to continue to use and recontract that infrastructure for the use cases customers have. Many of our customers have models optimized to run on Ampere, and they want to continue to use it; many have the same for Hopper. As recently as last quarter, we've done Hopper deals with our customers when the hardware is almost three years into its life cycle.
We continue to see those use cases evolve. We've talked a lot about inference use cases proliferating beyond the top-tier consumption of the leading pioneer AI labs, into the next layer of enterprises as well as the smaller offshoots coming in, like the Hippocratic AI and Moonvalley examples I quoted earlier. We feel very comfortable that infrastructure demand for AI, in this chronically and structurally undersupplied environment, is going to remain robust across infrastructure generations. We are seeing those examples happen on our platform today.
I think it's really important, and I think perhaps even underappreciated. Just as it relates to other perhaps underappreciated opportunities, also the cost of capital, something that you already touched on, where you've done a very impressive job reducing borrowing costs with each successive raise recently. What are the drivers from here to keep moving that further down? In your conversations with the rating agencies, what are they looking for in terms of a path to investment grade one day?
Thank you, Brad. Absolutely. A couple of our stated goals going into the IPO process, from a cost-of-capital perspective, were getting access to broader and cheaper pools of capital. Over the roughly six months that we've been a public company, we've relentlessly executed on both of those goals. We just closed our DDTL3 facility, our third delayed draw term loan facility, at SOFR plus 400 as a non-investment-grade company, a 900-basis-point decline from the comparable rate on our prior facility as a non-investment-grade borrower. That is an example of how relentlessly we've been able to bring down our cost of capital. In addition, the DDTL3 facility was funded entirely by top-tier investment banks; there was no private credit involved.
In addition, we've done two high-yield issuances since going public, both of which were incredibly successful, oversubscribed and upsized, and at increasingly lower cost of capital to the company. We continue on that trajectory. As for the rating agencies, we've received incredibly positive feedback from them. What they are watching is execution: whether the company continues to execute against its business objectives. We are well on the path to becoming an investment-grade company as we continue to scale the business.
You've done a great job in this regard, and we continue to watch the progress. It's a really important part of the story. Maybe Nitin, just shifting to the demand environment as we sit here today, it seems that the demand for AI compute is almost insatiable. What are you hearing from customers in terms of their ambitions and needs as you look ahead into the pipeline?
Absolutely. Mike Intrator, our CEO, talked a little bit about this during our earnings call earlier this month. The demand remains relentless. We're still in a chronically supply-constrained environment where capacity constraints, especially around powered-shell capacity, are the biggest limiter of our growth; demand outstrips supply, and we continuously see that demand grow with our customer set. About 18 months ago, we were talking with our customers about deployments at the 10 MW+ scale, which then transitioned to 50 and 100 MW deployment scales. Today, the conversations we are having on the pipeline front are more at the gigawatt-plus scale of deployments. The demand for this infrastructure continues to expand at an unprecedented rate, and we are very well positioned to take advantage of it.
We talked a little bit about expanding our footprint around our contracted power. We have 2.2 GW of contracted power in our portfolio, and we are continuously looking to add more. We have 470 MW of active power at the end of Q2. Mike talked a little bit about this as well on the earnings call, where by the end of the year, we are projecting to be over 900 MW of active power. We are rapidly scaling in terms of our capacity to meet the end customer demand. The demand continues to outstrip supply at this moment for us. The signals in the market continue to be that way.
I mean, it's breathtaking. It really is, to put it in one word. With the scale of what you're talking about and the dynamic nature of this market, how do you approach planning for some of the longer-lead-time inputs like land, power, as you talked about, and data center shells? And what's the limiting factor in how fast you can grow over the next three to five years?
Yeah, again, the limiting factor for us continues to be supply: supply of powered-shell capacity at the quality of infrastructure that we're looking for. In terms of how we think about it, this is an area where we do put at-risk capital to use, because the longest-lead-time item is powered-shell capacity. You'll see us leaning more and more into it. We recently announced our intent to acquire Core Scientific, which is a great portfolio addition for us: not only do we get access to the 1.3 GW of gross power they have, but also a gigawatt of expansion power in their portfolio. These are the kinds of strategies we'll continue to deploy as we look for incremental capacity across the board.
Our two data center announcements, in Lancaster, Pennsylvania, and Kenilworth, New Jersey, are also examples of how we continue to look forward in adding to our portfolio of secured or contracted power. Planning, as I mentioned, is really hard in this environment. The way we think about demand is in terms of where our customers are signaling. We are fundamentally client-led in how we build: we're building where and what our customers are asking us to build. We're not in the business of speculatively building infrastructure; we're in the business of building infrastructure on the back of strong committed customer demand. Our power strategy, how much we procure and where, is driven fundamentally by where our customers are leading us.
It absolutely is. We're in unprecedented, exciting times. You guys, as I said, are at the bleeding edge. The constraints that you face, the demand that seems insatiable, these are good problems to have. With all this demand, it requires a lot of funding to service it. I know we already touched on this a bit, but can you frame for us your financing strategy to capitalize on all of this as you look ahead and execute against this massive pipeline?
Absolutely. This is one of the core CoreWeave strengths, and I would say one of the underappreciated strengths the company carries: the ability to fund this infrastructure in a responsible manner at the scale at which we are building. CoreWeave has a demonstrated ability to do so at scale and at increasingly lower cost of capital. Since 2024, we've raised over $25 billion of committed investment in the company, all of it at increasingly lower cost of capital. Building and scaling this infrastructure at the unprecedented rate at which it is happening requires a very sophisticated financing strategy, and CoreWeave is very well positioned to pioneer and execute that strategy at scale.
We've done, as I said, north of $25 billion of committed funding in the company since 2024, and we are just beginning to scale this infrastructure. What you will see us do in the market is continue to scale on that engine. The core constructs of what we do remain the same: our CapEx is fundamentally success-based, meaning we build the infrastructure that goes within a data center on the back of committed long-term customer contracts, such that we can finance that infrastructure. That debt is self-amortizing, naturally de-leveraging within the contours of the committed customer contracts. Many of these contracts are with high-credit-quality customers, and as the credit markets get more comfortable with the new AI labs and their credit profiles, there is only upside associated with that for us.
We continue to be executing relentlessly on this sophisticated financing strategy to build this infrastructure at the scale that the demand in this market requires it to be built.
Thank you for that. That's very helpful. I want to maybe turn back to your competitive differentiation. You know, one argument that we've heard from some of the hyperscalers is that ultimately most inferencing will take place closer to an organization's data, which largely sits in general purpose clouds. It also requires a number of ancillary services to support production applications. Can you talk about the things that you're doing on top of a core platform, whether that be with Weights & Biases today or other initiatives going forward, that perhaps makes you a destination for full-blown AI applications?
Absolutely. Look, there is a whole plethora of services that the general-purpose clouds have built around general-purpose workloads. We're not in the business of building that whole service stack. We are very focused on building the stack that AI developers need to run their AI applications. Mike touched on this again on the earnings call, with the storage deployments, for example. We've had AI-centric storage deployments across multiple different vectors, from IBM Spectrum to DDN to VAST to Pure Storage, deployed for our customers at scale on our platform, which allows customers to scale based on their needs. We are fundamentally client-led in that approach: we build the platform that serves what the end customers need.
Weights & Biases is a great example of integration into that platform. The early integration efforts have gotten tremendous customer feedback on what customers need and how the platform should be built. Our learning cycle is based on where our customers want us to build, what they want us to build, and what kinds of services are most important to them. Data gravity is very important, which is why, with these storage solutions, we're not married to a particular storage vendor. We build what our customers need and meet them where their demand is. That's a core strength CoreWeave continues to execute on, and we continue to build the stack on those principles.
Am I wrong to think that your customers, especially enterprise customers, would demand that you and their general-purpose cloud provider accommodate some type of interconnect, being able to access resources both within and outside CoreWeave?
Much of it is already happening. A lot of our customers today operate in dual clouds, where some of their general-purpose workloads sit on that general-purpose infrastructure, and many of their AI workloads run on the CoreWeave platform. It happens in a manner that is seamless for the end customers, so much of what you're describing is happening on the platform today. You'll see more and more of this dual-cloud approach, where customers choose for their AI workloads the cloud best suited to them, because running those workloads reliably, efficiently, and at scale is business-critical, and that is where the CoreWeave platform shines.
Thank you for that. You're in the business of AI. AI is purportedly replacing labor or will over time, augmenting labor. At the same time, the one expertise that is still in super high demand is AI talent. We hear a little buzz around AI talent wars. You guys have done a great job attracting truly exceptional talent to date. How do you ensure you retain and continue to add to this amazing team?
Yeah, so the one thing that stands out about CoreWeave, beyond its technology prowess and its financial know-how, is its culture. People join CoreWeave for the culture: having a healthy disregard for the impossible and working together collectively as a team to achieve those impossible results. That has been a key driver of our ability to attract as well as retain talent, because we are building a generational company here, challenging the conventional norms this industry was built on and scaling at an unprecedented rate. Scaling the company with a culture of collaboration, with a healthy disregard for the impossible, and at the bleeding edge of technology deployment at scale is what continues to drive the company's growth.
Got it. As we also think about the various dimensions, the growth and how you will achieve that, as you think about securing the capacity you need going forward, is additional M&A a la Core Scientific on the table? How are you thinking about build versus lease going forward?
Yeah, Mike's talked a little bit about our M&A strategy in the past. We think about M&A along two different vectors. Number one is strategics, which is where Weights & Biases came into the picture: we identified it as a strategic play for us going up the stack. The second is operational efficiency and opportunistic execution against our business goals, which is where Core Scientific fits in: it allows us to scale as well as gain operational efficiencies. We've said publicly that we expect to generate about $500 million of annualized run-rate savings from that acquisition by the end of 2027. We'll continue to think of acquisitions along those vectors. Having said that, we have nothing on the horizon beyond the Core Scientific acquisition we've already announced.
Fair enough. I think we're almost about out of time. I've got one last question for you. What are you most excited about that we haven't touched on?
I think we've touched a little bit upon this, but what's most exciting for us is inference: the proliferation of inference workloads across the industry and across different use cases. We're now seeing not just the large AI pioneers capitalize on it, but also enterprises across different industries, from IBM to BT Group, to Morgan Stanley, Goldman Sachs, and Jane Street in the financial markets, along with smaller offshoots like Hippocratic AI and Moonvalley coming up with new use cases that allow this platform to scale. Inference, as we talked about, is the monetization of AI. Seeing those use cases develop is really exciting for our platform. Given that our platform is built to be an AI platform, not a training platform or an inference platform, and that our customers can fungibly move their workloads across it, that's super exciting for us.
Really exciting times. Really great to have you here.
Thank you for that.
Great opener to set the tone for the rest of this event. Nitin, you know, it's always great to see you, but even better here at the Tech Conference.
Thank you so much. Thank you so much for having me.