Excellent. Thank you everyone for joining us this afternoon. My name is Keith Weiss. I run the U.S. Software Equity Research franchise here at Morgan Stanley, and I'm really thrilled to have an opportunity to talk to Brannin McBee, Co-Founder and CDO of CoreWeave. Brannin, maybe just to open up the conversation, we've already seen a growth algorithm from CoreWeave that I haven't seen in my career, right? I mean, one of the things I love about research is getting to understand new companies and new opportunities and new businesses, and this is all new, right? Over the last couple of years and the last couple of quarters, you guys have described the demand that's already showing up in a huge backlog, already showing up in growth rates that are really eye-popping, as insatiable and relentless.
Which to me means that we're not gonna see this slow to a trickle anytime soon. Can you talk to us about where you've seen this demand come from? How foundational is it, and how certain are you about the durability of this demand, not just through 2026, but you guys have started talking about targets into 2030.
Yep. Yeah. Look, you're hearing this across the space, right? You're hearing it from our peers, you're hearing it from our clients, you're hearing it from our suppliers. The demand profile is truly overwhelming, insatiable. I would characterize 2026 as probably broadly sold out in terms of billable compute capacity that's available into the market. It's robust, and we'll get into this throughout the conversation, but it's robust across several sectors. Whereas previously, I would say, it was isolated to AI labs, right? That was really the starting cohort of where demand grew from, like, 2022 onward. For us, it then hit our hyperscaler cloud clients, right? They were coming to us to support their product build-out. Now I would say we've seen this rapid advancement of enterprise demand.
The enterprise cohort is absolutely there. Whether you look at our numbers for that, or at some of the numbers floating around with Anthropic and the market, you get a true understanding of how quickly enterprise adoption is scaling right now. Like, it's truly fascinating. Within our guidance, we offered color of exiting 2026 at $17 billion-$19 billion, and exiting 2027 at over $30 billion in ARR. To contextualize that a little bit further, we exited 2025 at $6.7 billion in ARR of demand, right? There's just these massive step-ups in revenue. Within all that, the customer behavior is changing as well. There's two main points I'll hit on there. One is the customer is looking for longer-duration contracts, right?
As most people in here know, we sign multi-year take-or-pay agreements, right. 24 months ago, those were, call it, three-year contracts. 12 months ago, they were four-year contracts. I would say now, in our $66.8 billion of backlog, those are five-year weighted contracts, right, with some contracts in there extending up to six years. I struggle to see visibility materially beyond that. The customer profile is saying, "We want this infrastructure for longer." As a reminder, that's for, like, kind of single-SKU exposure. They're coming in saying, "We want Hopper for five years. We want Blackwell for five years." That's one aspect of customer demand. The other aspect that we're seeing that's so strong is on older-generation infrastructure, right. Clients are coming in asking specifically for A100s.
They're asking specifically for H100s, H200s, and of course, for Blackwell. It's not a cadence of they come in asking for Blackwell and they can't get it, so they're like, "Okay, I guess I'll take Hopper instead." The driver of that is they've engineered their workloads. They have specific use cases for those specific pieces of infrastructure, right? To us, a main driver of that is inference, obviously. We'll chat about that more as well. This all speaks to this, like, highly sustainable demand profile, not only for latest-generation compute, but for the broad set of infrastructure that's being delivered into market today.
Got it. When you talk about this insatiable demand, there's other vendors talking about it as well. And you're right, it's not just CoreWeave talking about it. I fear that might overlook the advantages that CoreWeave has in the marketplace, right? And the question that I get from investors a lot is, is there differentiation, right? Is it just that they're able to provide the capacity and therefore they get the demand, or is there some reason that the customers are coming specifically to CoreWeave? From our work, there is, right? There is differentiation in terms of your ability to build out faster than anyone else, to get the most recent technology out there faster than anyone else, and probably most importantly, to keep it up and running more durably than anyone else.
Can you talk to us about how you guys have built out those competitive advantages, how durable you feel they are, and how you're gonna extend those even further with the software layer?
Yeah. I think very intentionally, right? This all goes back to 2019, when we were really standing up our CoreWeave platform originally. That CoreWeave platform was built around this concept of parallelizable workloads. The fundamental idea that parallelizable workloads are different from serializable workloads, that they have different infrastructure requirements, different operational requirements around them. You have to build the thing for the thing, so to say. If you don't, you're asking your clients to take compromises, right? Compromises in stability, scale, and ultimately performance. The product that we have in market today, I think, is widely recognized as the most performant solution for operating parallelizable compute at supercompute scale, right? These are supercomputers that are being delivered into the market, and they're wildly complex to not only bring online, but also to stabilize.
It's the CoreWeave operational infrastructure suite that sits on top of it that allows for that to exist. Who recognizes that? Third-party consultants recognize that, right? I think SemiAnalysis does a phenomenal job really benchmarking the different solutions that are out there in market for running this infrastructure. You know, we've been singular through their first two reporting cycles now. It's our clients who keep coming back to us for the product that we have. They're choosing to work with CoreWeave over and over again. That is diversifying not only within the AI lab cohort and the hyperscaler cloud cohort, but also within the enterprise cohort, right?
These are the blue-chip enterprises that are coming into CoreWeave saying, "These guys are running this infrastructure correctly." It's a long-winded way of saying this infrastructure is not fungible, right? An H100 at one cloud is not the same as at another. Within that set, we are the best operationally at this infrastructure. How do we keep that pace? I mean, this is our business. This is what we invest in on a daily basis. We have incredibly tight engineering relationships up and down the supply chain. We're working incredibly closely with our suppliers, with our clients, with our data center operator partners to understand and deploy what is the most effective engineering solution to delivering this supercompute infrastructure at scale.
At the end of the day, that's, like, a lot of proprietary information that comes into our business, right? Like, we are solving the problems of the most intensive AI users in the world on a daily basis, and it allows us to kind of skate to where the puck is going, right? You know, an example that I really love, and then I'll stop talking, is when chain-of-thought models were introduced into the market. This was at a time where I think the broader, you know, buy-side, sell-side thesis was models may quantize. Everything was getting smaller. We're gonna run lots of models on one GPU. Perhaps you don't even need data centers to run models in.
You give these engineers the most performant infrastructure out there, and then they started looking at it saying, like, "Well, what if we made the model bigger?" Inference left the GPU and left the node. You could use hundreds of GPUs to run inference instead, leveraging a high-performance bandwidth capability between those GPUs. You got chain-of-thought reasoning that was introduced to models, and it was like a complete paradigm shift in the way that the infrastructure was consumed. That immediately led us to understanding memory is gonna be a path that really matters, right? Like, how much context can you hold in memory on the nodes and within the clusters? Where is a similar analogy today? I think it's agentic workloads, right? With agentic workloads, we're seeing increasing pressure on peripheral demand.
I would say, like, CPU demand is absolutely gonna increase as agentic workloads are scaling. We're seeing that pressure from clients. Storage is another component that's been really exciting for us across our client base. We disclosed in our Q4 earnings that we have, I believe, a greater-than-80% attach rate for our storage product among clients from whom we generate more than $1 million in revenue. Our storage product today is well north of $100 million in ARR. That's a product that, like, didn't really exist not that long ago, right? Our ability to attach peripherals, and for them to scale quickly and be quite attractive for us operationally and quite attractive for keeping customers on the CoreWeave platform, I think is a really exciting opportunity for us.
Got it. You talked about the tight relationship with your suppliers, and one of those big suppliers, and an investor, is NVIDIA. You talked about an expanded relationship with NVIDIA this quarter, touching on a couple of ways that you're gonna be working more closely together. There was a $2 billion incremental investment, which is interesting, but even more interesting is what you guys are doing together with software.
Yep.
Can you dig into that? What's the nature of that relationship, and how is software becoming a bigger part of the story at CoreWeave?
That announcement in January, that more comprehensive relationship, was all about accelerating growth, right? Accelerating our ability to grow at the pace of AI adoption. The software side of it, you're obviously correct to highlight, and it's this acknowledgement that the CoreWeave software stack is the best way to run this infrastructure. You know, for us, when we discussed it in our earnings, the point was this can lead to an opportunity for us to sell that software solution to other entities. Entities who may want to have GPUs on their balance sheet, right? Or they might have data sovereignty priorities, and thus they need to have ownership of all the infrastructure.
That ability to take that into market, I think is a very margin-accretive path for us to be able to go down.
Yeah. Outstanding. All right, insatiable demand, a market-leading product and solution that you're bringing to the market, and it's created this tremendous backlog, $67 billion of revenue backlog as of the last quarter. Along with it comes a lot of CapEx. You've got to build out the infrastructure to be able to support all this demand. You guided to $30 billion-$35 billion in CapEx. I think one of the investor concerns is the financing of that on a go-forward basis. Can you talk to us about how you're planning on financing that level of investment in 2026 and beyond?
Yes. I think that we've been kind of market leaders and thought leaders in how we finance this infrastructure, starting with our original DDTLs, I think three years ago at this point. Parent co and asset co is kind of how we break the business down, right? All of our assets sit at asset co. That $30 billion-$35 billion guide on CapEx, let's call it a $32.5 billion midpoint, that all sits at asset co. At asset co, we're able to bring in these financing facilities, where I would say we have extreme levels of demand to participate in the paper.
One aspect to highlight, like, you've probably seen headlines about an asset-level raise that we were working on. While I'm not gonna talk explicitly about those headlines, that's something that we're very excited to announce to the market. This advancement of the structuring at the asset co, I think, is all representative of not only our execution track record, but also the durability of our contracts and our data center agreements. It all says, "Yes, we love CoreWeave paper. Yes, we love the way that it's being brought into market. We wanna underwrite more and more of it." That backlog will get financed down at asset co, with some participation from parent down into asset co.
I think the other aspect of your question was a little bit driven by margin of the business, near-term margin. Look, I spent the last two days at LevFin in Florida, and at the end of the day, CapEx requires an investment period as it comes online, right? We added some slides to our IR deck, which I encourage everyone to go take a look at, that break this down in more detail. The net of it is we have a roughly three-month investment period in bringing CapEx online. Once that CapEx is online and stable, take it like month three to 60 in a contract, each contract, or each deployment really, has a 25% contribution margin up to the parent, right?
For every dollar of revenue that's coming into these deployments, you have $0.25 going straight up to the parent, or straight up to the hold co afterwards, right. You take that one deployment and you now layer it with lots of deployments that are coming online over time, and you have this very robust sort of revenue stream going up to the parent. Where are we today? Today, we're in this extreme growth period, right, quarter-over-quarter growth. I think we grew our active power by nearly 30% quarter-over-quarter, Q3 to Q4. When you're incurring the expenses of growing active power that quickly, it's of course gonna weigh on parent co, right. 'Cause you're in this, like, three-month period where you have revenue starting to generate, but you're paying for the data center costs.
You're paying for the beginning part of depreciation on the infrastructure during that ramp period. When you have large blocks coming online on, like, a smaller base of infrastructure, or in other words, such large percentage growth, it'll weigh on near-term margins. All this is to say it's an extremely intentional path that we have to growing our business and moving at the pace of AI demand, right? Like, this is a phenomenal opportunity for growth. We're doing so in an incredibly risk-controlled and risk-adjusted manner, with highly accretive contracts that underpin the entire business.
Got it. There were two real veins in there. On the financing side of the equation, the people looking to finance you are getting more convicted, not less convicted, about the underlying business.
Yeah.
Your financing costs, if we look at what we've seen over the past couple of years, have come down from 12% to 9%. There's an expectation that it's gonna continue to come down in terms of what it's gonna cost to finance it. The other side of the equation, and I think what scared investors on the most recent conference call, was a forward operating margin forecast that was below what consensus had. The other part of what consensus had wrong was the CapEx number. We were well below what you were expecting, and there's just a natural, like, absorption period, right? That compresses margins in the near term.
Yep.
As CapEx is ramping that quickly. Yeah.
Yeah.
That wasn't a question. That was just-
Yeah, yeah. No, I completely agree.
Great.
Yes. Like,
So the-
CapEx requires investment.
Yeah.
We're at the, kind of the trough. Q1 is the trough.
Right.
Of that margin profile for us.
Right.
From here, it's expanding.
Right. And I think part of the reason why that touched a nerve, like, with operating margins, is what we're hearing in the marketplace: there's the demand that you're seeing, but you guys are also creating a tremendous amount of demand, and not just you. It's all the hyperscalers creating demand, and component costs are coming up, and memory costs are skyrocketing.
Mm.
It's hard to get people to actually build out these data center facilities. Can you talk to us about that side of the equation, like the degree of difficulty in executing to these build-outs?
Yeah.
... and keeping the project, like, on time and under budget, if you will, right? How do you maintain that margin profile with these potential costs?
Yeah. Supply chain. Supply chain is immensely difficult.
Yeah.
I think it's something we've been quite vocal about, both the private and the public versions of us. These are utterly enormous engineering projects that are being brought online. If you ever have an opportunity to go to one of these sites, like, please go visit them, and I think you'll begin to get an understanding of just how hard it is to bring these things online.
They'd have to be invited, because there's a lot of security at these data centers.
Yes. No, we saw that. There is. I think that that's been underappreciated by the market, right? Like, we're throwing out terms like 1 gigawatt, 5 gigawatts, 10 gigawatts, like it's just, you know, a number on a spreadsheet, but the reality is, like, it is thousands and tens of thousands of people to deliver infrastructure at that scale. I think, you know, if we were to ask, like, where is the bottleneck in the market today, I would differentiate between power and data centers, right? It's less about electrons, right? We observe that the electron availability on the U.S. grid is there.
But it's how do you deliver those electrons into the racks, into the servers, whether it's the physical infrastructure that sits on the site, like, you know, transformers, backup gen, backup battery, everything along those lines, or it's just the people, right? Like, electrical engineers are an incredibly critical part of delivering these sites. You can't really make more electrical engineers very quickly, right? That is a skilled trade that takes years to bring a workforce online for. I think that that is where the bottleneck of growth is in the near term. Where do we sit within those profiles? We predominantly lease our data center capacity. We're doing a little bit of self-development ourselves, but predominantly we lease. I think we're at 43 active sites in operation, right?
It's not that we just have, like, one or two sites that we're looking at that, like, dictate the success or failure of our business. We have 43 that we've already delivered. We know how to do this. We've done it for years. We know how to navigate the complexities in supply chain. As I'm sure everyone recalls, in Q3, we disclosed that we got surprised, right, by one site. We bring a lot of conservatism into our forecasting. We know how our operators work, but everyone gets hit by supply chain problems, right? Like, it is just incredibly hard. That one site, we worked closely with that operator to get it back on track.
We're happy to say, as we disclosed in our earnings, that site is firmly back on track. I think we actually delivered a little bit early on that site relative to our expectations. We're only able to do that because we have all this experience of executing on sites. Will it remain challenging? Absolutely. Do we build a lot of conservatism already into that supply chain? Yes. I think going to see these sites in person brings a lot of context to them.
Yeah, definitely. Maybe to just double click on that. I'm not a hardware guy, but I know some hardware guys.
Yeah.
When memory prices are up 4x, 5x, it seems like a pretty bad day for Dell, right? It seems like a pretty bad day for HP. Is it a bad day for CoreWeave in that same way? Like, is it too small a part of your bill of materials, or are you able to pass on those higher costs to your end customer?
It's a very small component of the actual node costs, right? Overwhelmingly, it's on the GPU side...
Right.
... Relative to memory. For us, I would say we're far more focused on supply chain, right? Like ensuring that we can get the components, 'cause you're absolutely correct. Like, as component costs increase, that just gets passed on to the end consumer, ultimately, right? Same with, like, electricity costs.
Okay.
If electricity costs were increasing on new sites that we're entering into contracts with, that gets passed on to the end consumer. I think the end consumer is very comfortable with, like, regional pricing, for example. For us, it's entirely supply chain and ensuring we get the components to deliver the infrastructure to our clients. You know, going back to my example earlier on chain-of-thought being introduced to the models, it was Q1 of last year where we really saw that there was gonna be this increasing focus on memory, right, within the LLM space. That was driven by how close our relationship is with our clients on an engineering level, to understand what matters to them and where the technology is going. It gives us this really unique lens to understand how the demands of the market are going to evolve.
Memory was not that much of a surprise to us, right? We see peripheral demand in general increasing.
Mm-hmm.
I think that's, you know, only exciting for our business, because those are all products that we're able to offer to our clients to bring them a more robust, like, full cloud experience.
Got it. On the other side of the equation, on power, there's been a lot of concern about access to power and availability. On the most recent conference call, you guys talked about a goal of getting an incremental, like, 5 gigawatts of power by 2030. How comfortable are you in the ability to find that, the ability to source that? Are there regional difficulties, are there global difficulties, or are you guys pretty comfortable that, "No, we've got a pretty clear line of sight to being able to bring on an additional 5 gigawatts"?
I think that power's out there. It's again gonna be more navigating the supply chain on the data center side. I believe our data center partners are very focused on bringing online these larger and larger deployments. I think what's super important for us in there is we are only procuring that capacity as demand is there for it, right?
Yeah.
The way that we approach demand, and our capacity procurement in general, is maintaining conversations with our largest clients, and it's more of a cadence of us asking them what they're looking for, on what time frames, in what regions. That informs us to go back out into the market to find the sites, whether it's 250 megs, 500 megs, or a gig of deployment. That conversation cadence probably sits 12-18 months in advance. While we won't sign an agreement with them as we're signing the contract for the data center, like, we've soft-circled our clients, of like, "All right, this client wants 250 megs in Q3 of 2027. Let's go procure that site." Then we enter, you know, negotiations for passing that site through to them for billable GPU hours.
Okay. Then, thinking about that, you've started to build out some of your own power shells and started to do some of the development yourself, but the majority of what you're doing is vis-à-vis your partners, having a lease against that. How are you thinking about that dynamic going forward, and the balance?
Definitely.
How important it is.
Yeah
... Both parts of the equation or does the lease get most of it done or enough of it done?
I would say the important part of it is getting access to the active power. We refer to active power as something that we can step into and start delivering infrastructure out of.
Mm-hmm.
Right? What we care about most is getting access to that active power on a timeline that our clients want, because from there, I think we have the best execution in the industry of delivering stabilized supercomputers once it's been given to us, right? They're measured in weeks.
Yes.
I think we would probably say, like, four to six weeks, somewhere in there, from a deployment-level perspective. What will that mix be? We're already so engaged in the engineering side of these sites, right? It's not like there's just blueprints out there of, like, "Oh, here's how you go build a next-gen AI data center," right? Like, we're so heavily involved on the engineering side and the deployment side that we're kind of already there.
Mm-hmm.
I think we really benefit from continuing to build out that internal competency of building ourselves. We will remain opportunistic in the build-versus-lease approach. I think we'll have a healthy mix. I think we'll likely be bringing online some more self-development in there, especially in the context of 5 gigawatts. I think we benefit from being a strong component of that. We're not gonna offer, like, explicit guidance, 'cause it'll ultimately all be informed by customer demand. Whatever the customer is looking for, we'll prioritize around it accordingly.
Got it. You mentioned the investor deck that you guys updated on Monday. I would definitely tell everyone to look at it. There's a lot of really good data in that investor deck. You talked about the 25% contribution margins in a five-year contract, in months four through 60. Can't do that math in my head. It's probably more than that, it's more than 60... I really messed up that math. You know exactly what it is, like, through the five years.
Yeah, months three through 60, 25% contribution margin.
... And over the entirety of the contract, you guys talked about, like, a 15% free cash flow margin.
Yeah.
That in and of itself against the $67 billion backlog, super interesting.
Yes.
What's even more interesting is if you could continue to monetize that asset in year six, seven, and eight. I think that brings us back to one of the big debates around GPUs and the GPU economy is the useful life.
Yeah.
What is the useful life of that GPU? Should we still be thinking six years? Should it be longer? Should it be shorter?
Six years. I think we've been very consistent about that. We're aligned with our peer set on six years. I think that is the absolute right number to be using for depreciation today. This is kind of funny. On our analyst bring-down call after earnings, you know, we went through 45 minutes of a group call of, you know, questions about the model, the business, et cetera. It was the first time that no one asked about useful life. And this has been a persistent question for years, like, what is useful life? I took it as a bit of a sign that people are really coming around to understanding that a six-year useful life is the correct number to be using, and there could be a reality that useful life may extend beyond six years.
You know, the oldest infrastructure that we really have at scale, from an empirical data perspective, is A100s, right. A100s are a 2020 SKU. That puts them, like, right at six years right now. We disclosed in earnings that in 2025, A100 pricing for us actually increased, right. It held its pricing power. A100s were within 10% of where they were at the start of the year, right. I don't look at those as, like, a relative metric between each other. I look at those both as these immensely strong demand signals that this older-gen infrastructure, which I think some portions of the market thought just went to zero value after two years or something, really did not.
It's driven by this fact that there are use cases specific to the infrastructure, right? I think it's driven by inference, so heavily by inference. I mean, inference is just exploding, along with the monetization of inference. I think very real revenue, and return on revenue, is being driven off of that. We're signing five-year contracts for committed take-or-pay use on the compute. You know, that's one year of life that would need to be extended on the end of that. I think overwhelmingly we are seeing the empirical support. At six years, we see no reason to change that in the near term, but we're quite excited about the prospect, especially from a margin perspective, of useful life beyond six years.
I think that that is a reality that we're about to enter for our business.
Got it. NVIDIA has been dominating the GPU market, or the accelerator market, for some time, and you guys have been very much an NVIDIA fleet. There's more silicon coming into the marketplace, and there's a lot of buzz around TPUs right now. Is there any chance that we're gonna see CoreWeave work with other silicon anytime soon, and why or why not?
Yeah. It's a question we've had for a while, and our answer has very consistently been, we are client-led in what we build, right? We're not purchasing compute, building it, and hoping people come and use it, right? The path we take instead, which I think is a much more risk-adjusted path, is we wait for the client to come in and say, "This is what we are looking for you to build." This is how we enter CapEx, and why our CapEx is so de-risked when we enter it that way. The client, and it could be a little self-selecting just because we are so well known as an NVIDIA shop, but the client's asking only for NVIDIA. Like, we aren't getting requests for other types of silicon. Make no mistake, like, we can run anything, right?
We're hardware-agnostic in what we operate, and I think a really good example of that is the transition we went through from Hopper into Grace Blackwell. For all intents and purposes, that is different infrastructure, right? It is an entirely different way to build and deliver compute once you get into an NVL72 configuration versus just 42U air-cooled racks. Our software just adapted right along with that, and we were first to market bringing H100s available to clients, then H200s. We were first to market with GB200s, GB300s. That's all because of this software solution that's able to operate really any type of infrastructure underneath it. But again, we just don't get demand for anything but NVIDIA infrastructure right now. We think it's the most performant platform that's out there.
Got it. One last topic I want to hit with you. We talked a little bit about the software, the enablement software that enables you guys to run these GPUs so effectively. You've also built out a whole other layer of software on top.
Mm-hmm
... To expand the capabilities of what you could do, a lot of it through acquisitions. There's Weights & Biases, OpenPipe, Monolith, Marimo. How should we think about the software strategy for that layer?
Sure.
Right, within CoreWeave? Is it something that becomes material to the story? It's hard to see, because you guys have built up such a backlog for the GPU service.
Yeah. It's, it's somewhat of a playbook that we've all seen, right? The playbook of how you build the cloud to run the internet, host websites, store data lakes. The first step in that playbook was get the infrastructure right, get the foundation correct, allow for people to run their workloads on you efficiently, and build the purpose-built platform for the highest performance possible. Then what did they do next? They started building apps on top of it. They started adding peripherals on top of it. I think that's the exact same cadence that you see us in right now. We are the best operator of GPU supercomputers on the planet. I think that is overwhelmingly agreed, with our suppliers, our clients, the broad market.
The next step in that process is to build app layers on top of it, add peripheral infrastructure around it. You're seeing us go through that in real time. Absolutely correct. It's hard to say how all that scales, but I look back to our storage product, which is well north of $100 million in ARR and has that 80% attach rate with over-$1 million clients. That product, like, came into existence out of nowhere. It scaled so quickly for us. I think that speaks to the velocity at which our clients are able to adopt these peripheral components, these software components that we're bringing around our core products. That's a really exciting thing for us.
Outstanding. It's been a rocket ship of a story. Congratulations on the success, and thank you for coming and sharing with us at the Morgan Stanley Technology Conference.
Thank you so much. Appreciate it.