All right. Good morning. Welcome back, everybody. I'm Joseph Moore, Morgan Stanley, semiconductor research. One of the real highlights of this conference for me, CEO of AMD, Lisa Su, is here right on the back of signing a major deal with Meta. Very excited for this conversation. Lisa, thank you for coming.
Thank you, Joe.
Maybe before we go into details, can you just start off with an overview of what you're excited about for 2026 and how you think this year is going to play out?
Great. Well, first of all, it's great to be here. Nice to see everyone bright and early at 7:00 A.M. Look, I think what we saw coming out of 2025 was just a lot of momentum, a lot of demand for high-performance compute, and really an environment that favors strong product cycles and deep customer relationships. 2026, if I think about the first few months, is shaping up to be, again, a very exciting year. We're very excited about the data center business and the overall growth potential. We're launching MI450, and it's looking really good this year. We have some great, deep customer relationships to talk about there.
Frankly, we see just tremendous demand for traditional compute as well. If you look at the CPU cycle, we've always believed that the computing stack is heterogeneous, and you're gonna need CPUs and GPUs and FPGAs and all of these components. That's really coming to fruition here in 2026. A few months ago, we had our Financial Analyst Day, where we put out an ambitious financial model to grow at roughly a 35% CAGR over the next three-to-five-year period.
I think as we look at the market dynamics and the product dynamics, we are very much on track to that, with an ambitious target of over $20 of earnings per share in that timeframe. Lots to be excited about.
Great. Well, thank you for that. Maybe the big news last week, the deal that you signed with Meta. Maybe give us a little bit of an overview of that deal first.
Absolutely. Very excited about deepening our strategic relationship with Meta. If you think about what it really takes to build long-term, lasting partnerships, it's really about roadmap alignment, technology alignment, and aligning our capabilities with what the customer is trying to achieve. Meta is a customer we've had a long-term relationship with. They've been a deep user of CPUs throughout their data center portfolio, and they've also been early adopters of the MI300 and MI350 series. What we wanted to do at this point reflects that we actually see an inflection point in AI infrastructure. What we're seeing is the world is much more complicated.
Frankly, there is much more workload specialization. Whether you're talking about training or inference, large models or medium models, you need different types of compute. We were looking for a way to really turbocharge and strategically deepen our relationship with Meta. That's what we announced a few weeks ago: a 6 GW long-term strategic partnership where we're actually doing a semi-custom GPU for Meta, along with all of the rest of the work that we do with them on CPUs and other parts of the system. It was really a vertically integrated discussion in the sense that we started from the workload first and then worked through: what is Meta trying to achieve with their workloads?
What do they see the future of their data center infrastructure looking like? Using our very flexible architecture, we designed something specifically tailored to their needs, which really allows us to increase our footprint in the Meta ecosystem. Very excited about that. As a long-term strategic partnership, it enables us to really build on each generation and, frankly, get even more tailored to where the workloads are going in the future.
Thank you for that. Obviously a great deal, a lot of enthusiasm for it. There have been some questions around the warrants. Can you talk about the warrants that you issued and how those warrants unlock value for you guys?
Yeah, absolutely. The way to think about it is, as I said, the AI infrastructure ecosystem is at an inflection point where deep partnerships really make a difference. I have to say we have a lot of customers that we work with very deeply across CPUs and GPUs, and most of them don't get warrants. Warrants are a very special instrument that we use for what I would say are transformational partnerships. When we look at Meta, what we see is a company that truly has a view of the full application stack. They are a foundational model builder. They are betting big on infrastructure. There's an opportunity, you know, not just as a consumer of chips.
I mean, obviously, we'd like people to buy chips. We're talking about triple-digit-billion-dollar deals. Those are great things. But we actually see an opportunity to go much broader than that, in the sense that you're actually charting the path for where AI infrastructure is going in the future. The value that accrues to AMD from a deal like this is, yes, we get to accelerate purchases, which is a great thing. We also get to accelerate our technology ecosystem and our software ecosystem, and that accrues benefits beyond just the work that we're doing with Meta, really to the overall AMD ecosystem. The key with how we've designed these warrants is that they're very, very performance-based. In some sense, both companies are incented to help each other win.
We win when Meta's foundational models are super successful and they need lots and lots of chips. We are motivated to give them the best infrastructure for their workloads. They're motivated to ensure that our ecosystem is as strong as it can be. It's a very good, win-win, synergistic partnership. It is a special thing. The conversation is around wanting to build a very rich ecosystem. The AI infrastructure world is growing by leaps and bounds, and we have an opportunity to significantly accelerate and align with one of the strongest model builders in the ecosystem, as well as through our deep partnership with OpenAI.
That one is very similar from the standpoint that the model builders driving foundational models going forward give us the opportunity to significantly align our roadmap, and that really benefits the overall AMD ecosystem across all customers.
Okay, great. Those two being special relationships, do you anticipate that other customers for MI455 would have a similar warrant structure?
I don't. I mean, you should expect that there are lots of other customers interested in MI450 and MI455. They are great products. I wanna make sure we start with the foundation: at the end of the day, the product has to be extremely competitive and, frankly, leadership-class for anyone to spend gigawatts of power on our systems. What we have is, again, lots of great partnerships across the board. I think we have a very, very competitive roadmap, and we're excited about where MI450 is positioned when we look at the landscape. We've always been very optimized for inference.
You're now seeing the growth of inference exceed training, which is what we all expected, but that's a great thing 'cause that means people are actually using all of these models to do real work. We're seeing the growth of agentic AI. All of these things favor our architecture. When I look at our combination of CPUs, GPUs, networking, and rack-scale systems, we really have all of these pieces coming together. Lots of excitement around MI450, but I would say that the relationships with OpenAI and Meta are pretty special in how they are framed as multi-generational partnerships.
Great. Well, thank you for the overview of that deal. Maybe we could delve a little bit more into your AI products. Starting with the foundation that you had: you've done really well with MI300 and MI350. Those have been leadership products that have gotten you to over $2 billion per quarter now. Now you're doing big investments into rack-scale, Helios, the ZT acquisition. What's different between where you've been and where you're going?
Well, I think we've made a lot of progress in the data center AI space. I think with each generation, we, you know, really increase the capability and the set of workloads that we address. I think MI300 and MI350 were great opportunities for us to really optimize, let's call it, infrastructure for inference. I think our inference capacity and capability has been really exceptional, and we've seen that adoption. We've also been very focused on building the software ecosystem.
The idea is we wanna make it super simple for people to adopt AMD technologies. We've gone from the early stages of MI300, when it might have taken a number of months for customers to optimize for AMD, to the point where you can do that in a short number of days. The tools are that good. The libraries are that good. Frankly, we're using AI extensively in that ecosystem building. When you go forward to MI450, that's why this year is so exciting for us. It really is a huge step function in capability. It's something that we planned.
We acquired ZT Systems because we believed in rack-scale infrastructure. The whole goal, if I think about the large investments that it takes in AI infrastructure, is to get our systems into the hands of users running workloads in as short a time as possible. It's really about time to workload. For that, the more we can do for customers in terms of full solution building, the easier it is for them to deploy. That's what we've done with the MI450 series. We've taken the view that an open ecosystem is a good thing. The Helios rack is actually based on a standard that we developed jointly with Meta.
What that allows us to do is really leverage that entire rack-scale system. The ZT team has been very, very active from day one since they joined AMD, really building that rack-scale system infrastructure. When we look at MI450 today and the progress that we're making, it's looking really, really good in the labs; we're running lots of workloads and working very closely with our lead customers.
You've made some really interesting investments into rack-scale that I know ASIC competitors, for example, aren't gonna be able to make. You've laid that foundation for MI455. I guess, can you just give us an update? You talked about it working well in the lab. You've talked about revenue in Q3 and a bigger volume ramp in Q4. I know your competitor started building racks, and there were challenges in the beginning; it took longer than they thought. What's your confidence in the ability to have silicon out in time and then meet those timelines?
Yeah. Well, we should start with: these are very complex systems. I will be clear about that, Joe. I think we've done all of the planning and a lot of, let's call it, risk mitigation in terms of building the rack-scale system. Even before we had final silicon in place, we were validating the rack-scale system, and we've had a significant number of cycles that are now being run. We've learned a lot from the ecosystem. Frankly, our partners have also been very helpful; having seen some of the early teething pains with rack-scale systems, they've given us a lot of feedback.
We've designed Helios with some of the, let's call it, previous issues in mind, so we do think it is going to come up smoothly. No question that we have a lot of work to do, but we feel very good about the steps that we've taken. The most important thing is to be running workloads on these systems, and that's really what we're doing now. We feel good about our positioning. I think we have all the pieces in place, and I think we have a strong set of relationships throughout the ecosystem to ensure that the Helios ramp goes very smoothly.
You talked about this as a leadership product. You know, is that really leadership everywhere, leadership in training, leadership in inference? You know, it doesn't seem like you're attacking some segment of the market that your competitor isn't. You're really going right at the center of the market.
Look, the MI450 series is a very general-purpose capability. The way we've designed it, because of our chiplet architecture, is quite special in how we put it together. What chiplets allow you to do is optimize for different workloads. If you look at our standard products, we've always had an advantage in memory and memory bandwidth, and I think we're gonna continue to do that. Those are very, very important when you're talking about large-scale distributed inference and capabilities there. We've also, in the same family, designed an HPC-specific part, our MI430 series. The reason I mention that is there are going to be workloads that require different data formats and different capabilities.
Because of our chiplet architecture, we can fundamentally mix and match different components, which allows us, with, let's call it, very incremental work, to get very significant workload benefits. With Meta, we took that to another level to do customer-specific optimization. We're bullish. I mean, we're very bullish about the positioning of MI450. I think it's the right time. It's the right product. We have customers who are anxious to get it in their data centers, and we're now planning those deployments. You can imagine, when you're planning multi-gigawatt deployments, we have to be planning together with the data center build-outs that are happening. It's exciting to see all that come together.
How do you think about the positioning versus custom silicon, versus ASICs? You talked about some of the customization capabilities you can provide, but it seems to me that the market isn't that spread out; we're all sort of focusing on the same types of workloads. I don't know that we have the same role for ASIC customization as we've had in the past. Yet the two customers that you have big deals with have deals with NVIDIA, have deals with ASIC vendors, have deals with AMD. How do you see all that interrelating?
Yeah. It's a very good question, Joe, and maybe we can take a minute to break it up into a couple of pieces. Let's start with the workloads. What we're seeing in the market, and what is clearly the next phase of AI infrastructure, is that there is no one chip that does everything the best. It is a heterogeneous world out there. There's actually a continuum of capability, going from, let's call it, the largest training clusters to inference, to more specific inference workloads, to even breaking up the workloads. I think this is a natural evolution. When you get into high-volume AI workloads, you want them to be as efficient as possible.
That efficiency comes from performance, but it's also performance per watt, and performance per watt per dollar. At the volumes these hyperscalers and these large foundational model companies are running at, you're gonna want to do that optimization. What we've always believed is that, in that continuum, our portfolio plays really, really well. We're seeing significant CPU demand, frankly, as a result of the inference demand picking up. We're seeing significant demand for our standard product, but we're also seeing this continuum where we can do customizations for specific workloads. Frankly, I think there is always a place for ASICs as well, for some more tailored applications.
The key is, we wanna get the best of both worlds, right? You want to have flexibility and time to market. That's what we believe our chiplet ecosystem and our overall ecosystem investments deliver. You also wanna be able to tailor for specific workloads.
That's why we really believe this world is going to come to a place where you do have different chips being optimized for different workloads. The capability that allows you to optimize the quickest, where you get maybe not full tailoring or a full ASIC, but, let's call it, 80% to 90% of the benefit in a shorter time with similar economics, is a great thing.
Great. Thank you for that. Can you talk about the system-level things that you need to provide? In networking, you're scaling up with UALink, but there's also UALink over Ethernet, and there's a CPO migration to think about. Can you just talk about your networking roadmap and how important that is to the AMD rack-scale roadmap going forward?
No question, the networking roadmap is very, very important. What we're trying to unlock is system performance, and system performance includes all the elements of compute as well as the networking infrastructure to scale up and scale out. I think we have a great team internally, in part from the acquisition of Pensando that we did. We've done quite a bit of work on understanding the networking workloads. We have our own scale-up NIC as well, and we work across the ecosystem with some of the switching partners.
I think the key for this, from our standpoint, is again about open standards and giving the customer choice. UALink is a very specific, AI-optimized network that we believe can be beneficial. We also believe there's a large set of customers who gravitate towards Ethernet because of its compatibility, and so we support UALink over Ethernet. We'll continue to support that Ethernet ecosystem. The key for us is to be very mindful about rack-scale infrastructure performance and capability. There's lots of optimization in both the hardware and software ecosystems, and I think we're deeply partnered across the ecosystem to deliver that rack-scale performance.
Great. Thank you. Before you had the Meta deal, you had the OpenAI deal, which was the same basic size, 6 GW. Is everything the same with that deal? I know NVIDIA's kind of moved a little bit more toward provisioning their own data centers, versus what you're doing, which is more cloud-centric. Is that OpenAI deal tracking to what you thought it would?
Well, I have to say, first of all, our relationship with OpenAI is better than it has ever been. Our strategic relationship has definitely deepened; we're much more tied in terms of roadmap. We're actively planning the installations of the first gigawatt of capability, and it's really playing out as we expected. I would say nothing has changed with the overall deal structure, and I would say that I am quite pleased. It's clearly paying off in terms of the technical alignment that we have and the pre-work that we're doing across the MI450. We are basically co-validating together, and we're planning those installations together. Yeah, we feel great about that relationship.
Okay, great. You've talked about this as a $1 trillion market by the end of the decade, and you've talked about $120 billion of AI revenue for AMD. I guess the market seems to be concerned about the sustainability of the strength we're seeing now. People look at the hyperscale cash flows as being sort of neutral to negative. The market kind of understands that things are strong near term, but it's worried about the duration.
Yeah.
What's your view on that? You continue to believe in the $1 trillion. We talked backstage; there are a lot of indications of that. What gives you the confidence in the sizing of that market?
Yeah. Look, we feel really good about the market. AI is solving real-world problems, and that's what we see. The investment in AI infrastructure, in some sense, we're equating with productivity and intelligence, and that's a great thing. Yes, we are all investing ahead of the curve, but well within reason of where we think the payoff is going to be. I can just tell you, every week, every month, we are seeing significant new enterprise use cases that are showing the payoff of what AI can give us. As I talk to enterprise customers, we're still in the very early innings of deployment. All of the infrastructure that we are building out, these are really planned builds, right?
If you think about it, the CapEx discussions today are planned builds for later in 2026, 2027, and beyond. They're really to address that enterprise demand and really deliver the payoff of AI. We are seeing it. We are seeing the early signs of it in our own business, and we're seeing the early signs of it in our customers' businesses. I think the thing that's a little bit different, Joe, which maybe people need to understand, is it's just not all about GPUs. This is not just about deploying accelerators. This is actually about deploying the entire compute infrastructure you need to service all of those agents that we're all gonna be spawning with our new AI capabilities, right?
If a company has 10,000 people and they add another 10,000 agents on top of that, they're gonna need a lot more compute to satisfy what all of those agents are doing, and we're seeing that. As much as I'm very, very excited about the GPU portion of the business, the CPU portion of the business has actually far exceeded my expectations in terms of demand. I was pretty bullish to begin with, right?
Yeah.
We talked about, I talked about, like, a high-teens CAGR in the compute market at our Analyst Day. I can tell you that every indication I'm seeing today is that that compute market is even much larger than that. The ratio of traditional compute to accelerated compute is such that you really need a very balanced system overall.
Yeah, that seems like we've moved from theory to seeing that play out in real time now.
Did you not believe me when I said that, Joe?
I believed you, but we certainly are seeing evidence of it now. Can you talk about that? It seems like the microprocessor market is dealing with shortages at the moment. What's your visibility into being able to meet that demand that's out there?
Well, first of all, we have a very strong roadmap, and I think we have executed very well. As we ramped Turin, it was a very, very fast ramp. With each generation of our EPYC processors, we've actually increased the workload coverage. We started with, let's call it, the main cloud workloads at the hyperscalers. We've now really expanded to the breadth and depth that you would expect with strong products, and that's across both hyperscalers and enterprise. What we're seeing is really that build-out continuing in a very positive fashion. Back to your comment: is there supply tightness? Yes, there is supply tightness.
That's really because the market sizing is bigger than what we had forecasted three or six months ago, and it always takes time for the supply chain to catch up with what the market wants. I can say that we are very, very well positioned from a supply standpoint to meet a large percentage of that demand, and we are still working very closely with our supply chain partners to expand that capability as we go through 2026 and 2027. What we're looking for is, again, durable demand, not just, "Hey, are we just catching up because we haven't upgraded CPUs?" No. I mean, that is the wrong way to look at it. Yes, we are upgrading-
I said that though.
You did say that too, didn't you?
Yeah.
No. Look, I mean, I think we never know until we actually see what happens in the workload. I would say even the hyperscalers are surprised.
Yeah.
If you talk to our top customers, they're like, "Wow, Lisa, the demand for CPU compute sitting alongside AI was perhaps something that was under-forecasted." We are in the process of catching up. I think it's a great time because, one, we were already, from an AMD standpoint, expanding our workload coverage, and two, you're seeing the customer demand really strengthen as well. We will continue to increase our supply coverage as we go through this year and into next year.
This definitely feels like a very durable cycle, and it's a very pleasant, I won't say surprise, but a pleasant development. Our overall goal is to provide the right compute for the right workload, and I think our data center business clearly shows that we have all the right pieces for this AI cycle.
I know Forrest had talked recently about Turin versus Granite Rapids being as close as this is gonna get, and Venice being a clear indication of AMD pulling ahead. Can you talk about that? I guess your competitor would say, "We have fabs; we can meet this demand," whereas AMD may be constrained on wafers. Any concern about that?
I would say from a competitive standpoint, we feel really good about Venice. We've continued to be very aggressive with each generation of our CPU build-out, and we continue to broaden the workloads that we're covering. Venice was one of the very, very first products in TSMC 2-nanometer, using our chiplet architecture. It is on track to ramp very nicely in the second half of the year, and we feel very good about our ability to expand to the demand out there. What we're seeing with Venice, which tells you a little bit about the competitive position, is that with each generation, what we're trying to do is align customer ramps with our ramp, right? We want customers to have the best technology that they can.
Frankly, that doesn't always happen, because customers have their own cycles that they're going through. With Venice, every one of our large customers wants Venice the moment it comes out. That kinda gives you a sense of how good it is, because if you have power to spend, you wanna spend your power on the best technology out there, and that's what Venice will be when it comes out.
Great. Then on the CPU side, can you talk about competition from ARM? Your bigger hyperscale customers do have some ARM deployments out there. How do you see those fitting into this ecosystem?
Look, I think ARM has always been a part of the data center ecosystem, though I would say it tends to be on the lower-performance side of it. We view it as not about ARM versus x86; we view it as wanting the right processor for the right workload. The performance-per-watt capabilities, the performance-per-dollar capabilities, and the overall TCO are what's critical. With the broad coverage that we have as we go into the Venice generation, I see our TAM expanding, and I see our share expanding because of the capabilities of Venice.
Okay, great. Thank you. I have one more question, and then I'll turn it to the audience. The role of memory in all of this, are you seeing impact of memory shortages on the GPU side, on the CPU side, any part of your business at this point?
It's a dynamic world right now. When you look at the memory market, first of all, we plan with the memory vendors many years in advance. We've been planning for the MI450 ramp, and we're planning our HBM4 ramp across the memory ecosystem. We feel good about where we're positioned from an HBM standpoint. There are other knock-on effects in the memory market right now, certainly if you talk to any of the memory vendors, in terms of where DDR4 and DDR5 are positioned, as well as some of the consumer grades. The impact that we're seeing is that memory prices are affecting system prices; you see system prices going up.
I will say that the enterprise demand on the data center side seems, again, very durable. People are wanting to compute, needing to compute, and although they're paying a bit more than they might have six or nine months ago, I think that is the main impact. I am watching the impact on the PC market. We would expect that there might be more cost pressures, and those cost pressures may change the PC market dynamics a little bit. While our overall sell-through in the PC market is actually quite good, we are expecting that in the second half of the year we may see a more muted part of the market, just as memory prices are volatile.
We'll have to see how it plays out. I mean, at the end of the day, you know, the one thing about the industry is we tend to like demand, and we tend to like fulfilling that demand. I think there is a lot to still play out. On the data center side, it is full speed ahead, and, you know, we'll have to see what happens in the consumer markets.
Very helpful. Thank you. Let me see if we have any questions from the audience. In the front.
Morning. Thanks for taking my question. Following OpenAI and Meta, can you just talk about the propensity for other gigawatt-scale deals with other hyperscalers and AI-lab types? Thank you.
Absolutely. Look, we are very ambitious about what we can do in the data center AI market. From a roadmap standpoint, as much as we're excited about the MI450 series, we're actually super excited about what's beyond it as well. These days, there are multiple gigawatt-scale customers, and every lab is looking for choice at this point. This idea of diversity of compute is important. With Meta and OpenAI, I think we've built foundational, multi-generational deals that will absolutely help pull the entire ecosystem forward and enhance the entire AMD ecosystem.
I think there are a number of other customers, let's call it at that scale, that we see as strongly interested in MI450 and beyond. Back to this comment of what we are trying to do: if you think about the ambition that we have in the data center AI segment, it is a very, very large TAM, and we are currently at the very early stages of building out our business. What we're trying to do is accelerate it. We've talked about growing at over an 80% CAGR over the next three to five years in our data center AI segment.
With the visibility that we have from some of these large deals, as well as the broader customer set, I think we have very good confidence to not only meet but exceed those targets as we go forward.
Question.
Hi. Thank you. Can you comment on the status of the Chinese market opportunity and maybe also competition from China?
Sure. We've always stated that the Chinese market is an important market for us. We have a broad set of customers in the Chinese market on CPUs and in other areas. On the GPU side, it is still a little bit complicated. We were able to ship some MI308s last quarter, in the fourth quarter that we reported, and we talked about approximately $100 million this quarter. We're in the process of applying for licenses for the next generation, the MI325 chips. I think the Department of Commerce and the U.S. Government are still going through the approval processes for that. It's very, very hard to predict, and for that reason we're not forecasting additional revenue going forward.
We would certainly like to satisfy our customers in China, and I think there's a lot that we've learned by participating in the Chinese market, because they also have a set of models that are somewhat different from the U.S. models, and we want to be able to service that. We'll have to see how that whole environment plays out from a licensing standpoint over the next couple of months.
Do those limitations kickstart Chinese competition to some degree?
I think Chinese competition was always gonna be fierce. In a world as competitive as AI, we have to give the Chinese chip providers credit for what they're able to achieve. That being said, I think the roadmaps that we have from a U.S. technology standpoint are very, very strong. We want to be able to participate in the global market, and we need to continue to work with both governments to enable that to happen.
Last question.
Hi. Thank you. There seems to be a debate out there as to whether you're able to ship rack-scale solutions in volume in the second half of this year. I mean, do you have enough CoWoS capacity to do that, or is this much more of a 2027 story?
We definitely have enough CoWoS capacity. I know there are lots of people trying to check various things. The best thing I can tell you is: we have the capacity, we have the technology, we have the deep customer relationships, and the data center providers have allocated space for it, so we have to execute the ramp. We've always said the ramp is second-half weighted. Think about it as a little bit in Q3, but really ramping sharply as we get into Q4. This is, no question, a very, very important ramp for us. It's something that we've been planning for many quarters, and we feel good about the ramp.
Great. We'll wrap it up there, Lisa. Congratulations on everything you've achieved, and thank you so much for being here today.
Thank you.