Advanced Micro Devices, Inc. (AMD)

Morgan Stanley Technology, Media & Telecom Conference

Mar 3, 2025

Speaker 1

Please see the Morgan Stanley Research Disclosure website at www.morganstanley.com/researchdisclosures. If you have any questions, please reach out to your Morgan Stanley sales rep. So with that, very happy to have with us today from Advanced Micro Devices, Jean Hu, EVP, CFO, and Treasurer, as well as Matt Ramsay, who newly joined the company. I don't know your title, head of IR and other things, right?

Matt Ramsay
Head of Investor Relations, Advanced Micro Devices

That's a good one, yeah. I'll take that.

Anyway, so welcome, you guys. Thank you so much for joining us. I think we gotta start on AI; I think it's a rule. You had a great year one: over $5 billion of revenue for a product from a standing start, in a single year. Tremendous. People always want more, but that's a very good year one. Can you talk about what was required to do that and how much visibility it gives you? It doesn't seem like people buy a point product; they're buying a roadmap, so obviously that $5 billion speaks to a certain amount of confidence. Can you talk about that year one experience?

Jean Hu
EVP, CFO, and Treasurer, Advanced Micro Devices

Yeah, absolutely. First, thank you for having us. This is a great conference, and thank you for the question. 2024 was truly a transformative year for AMD. As you mentioned, we ramped MI300 really quickly and exceeded $5 billion. Both MI300 and our ROCm software are right now powering some of the most complicated AI models, at Microsoft, at Meta, at scale. So we made a lot of progress, and the team executed flawlessly to ramp the production of MI300. At the same time, we also made tremendous progress on our overall hardware roadmap and software stack. On the hardware side, not only did we introduce MI325, we also pulled MI350 into this year, and next year we're on track for MI400.

On the software side, we made a lot of progress with ROCm to really support all the different applications and workloads. We also acquired ZT Systems to build our systems expertise, to build rack-level and cluster-level designs. So overall, the progress we made and our engagement with customers broadened quite significantly. Not only did we add new customers, but with existing customers the engagement has become multi-generational. It's not just about generating MI300 revenue; it's about the discussions for MI350 and MI400, and how we help our customers build clusters that provide better TCO. So the way to think about it is, MI300 is the very beginning of our journey, and with each generation, if you look at our product roadmap, we get better, stronger, and more competitive.

Just like how we built the server side of the business, over time we want to be a strong player gaining market share in this market. When you look at the market opportunity, we do expect it to be very large, around $500 billion. Based on our execution so far and how well we are positioned, we do believe we can have a growth trajectory to tens of billions of dollars in annual revenue in this market.

I do wanna dig into some of the roadmap stuff, but maybe first conceptually, if we could talk about AMD versus ASICs; that's principally my focus. I think NVIDIA's pretty good at this; we all know that. But everybody's looking for an alternative. Nobody wants to be sole-sourced. Everybody's looking for ways they can generate ROI that NVIDIA can't give them. I think when we were at this conference 12 months ago, AMD was clearly the answer to that question in people's minds. Now people are talking a lot more about custom silicon and ASICs. At the same time, you guys have a lot of visibility with customers; you have a lot of customers who are still making investments in AMD. How do I think about that trade-off between the visibility that the ASIC vendors have versus the visibility that AMD has?

Jean Hu
EVP, CFO, and Treasurer, Advanced Micro Devices

Yeah, this topic is definitely top of mind for many investors. Maybe let us share our perspective; I'll start, and Matt, you can add. AMD is a high-performance compute company. If you think about our strategy and what we've done, it is to build a platform: not only do we have CPUs, GPUs, and FPGAs, we actually do custom silicon too. So our view has always been that different workloads and different applications require different compute engines to get the best economics. When you look at AI models, they have been evolving rapidly, and model innovation is literally accelerating every week. At the same time, AI scaling continues, right? You have pre-training, post-training, and test-time compute. All of those require different compute engines and different compute requirements. The GPU is really built to support a wide range of models; it's flexible.

You can program it, and you can tune it for performance as models change. So from a TCO perspective, that is how customers really think about it: the total cost of ownership when they are doing either training or inferencing. It's about the cost per token and the cost per flop. We are working closely with our customers to build those large infrastructures; that's how they are thinking about it. On the other side, an ASIC can be efficient if the workload is very specific and very stable, so you can really design it for the specific models, and when there is very large-scale deployment. But the ASIC also takes time. That's why people say there's an 18-to-24-month lead time. ASICs have visibility, but it's actually very similar for us in our engagement with customers.

When you think about the one-gigawatt data centers that need to be built, the lead time is really data center space, power, and all those things. We have to work closely with our customers to design the overall infrastructure. So to a certain degree, what AMD is focusing on is what we think is the vast majority of the market in the longer term, especially when you talk about a $500 billion market opportunity. That's where we see we can get the highest return on the investment we are making. Matt, anything?

Matt Ramsay
Head of Investor Relations, Advanced Micro Devices

Yeah, Joe, I would also add that it's pretty easy to see one way to generate TCO at data center scale: take an algorithm, design specific silicon for that algorithm in an ASIC, and have lower-cost hardware upfront. That's a pretty obvious way to try to generate TCO, fewer dollars upfront for the same computing. Another way to generate TCO is to build programmable, GPU-led infrastructure that can rely on the industry's innovations in software over time to drive better TCO and better ROI on the infrastructure you've already put in the ground, because it's programmable. And over the last month or so, we've all witnessed the market's reaction to DeepSeek.

To us, DeepSeek got a lot of attention because it was in China and because of a couple of things they claimed on cost. But it's a pretty natural thing for an industry, as the installed base of hardware grows, to start doing really rapid innovation in software to get better TCO out of infrastructure that's already in place. And if your infrastructure is programmable, you can benefit from that software innovation across the industry over a long period of time, over the depreciable life of the infrastructure you put in the ground. I think that's what gives us the conviction that programmable infrastructure is the way to go for the majority of the TAM. There are certain applications that ASICs are very well suited for.

And some of the folks in the market talk about those a lot. But I think over the breadth of workloads and over the fullness of time of software innovation, I think there's a lot to be said for programmable infrastructure. And that's where our customers are pulling us, and that's where we're pushing to bring increased competition and capabilities over time.

Can you talk about the role of the price of the hardware? You just mentioned you can do something cheaper, purpose-built. But if you're competing for that broader range of programmable workloads, if this were as easy as doing $5,000 cards, a lot of people would be successful at it. You and NVIDIA both have expensive cards. Even within your stacks, it seems like a lot of the business gravitates to the highest-performing part. So can you just talk about the role of price in all of this?

Jean Hu
EVP, CFO, and Treasurer, Advanced Micro Devices

I think comparing the price of a data center GPU with an ASIC is probably oversimplified. It's more nuanced when you think about how customers think about TCO, right? As Matt mentioned earlier, for TCO they measure cost per token and cost per flop. And when they think about it, they think about how many years they are going to use the infrastructure and how many different models and workloads they can run on that infrastructure over those years. On the ASIC side, if you want to talk about the TCO economics of any ASIC investment, there's a very large upfront R&D cost. That's not part of the ASP, but it's part of the investment.

And because it takes time to invest, you also need to consider the risk of obsolescence, because the design tends to be very fixed; it could be like one or two years before you have to turn it around. So when you compare at that point, the upfront ASP is really not a direct comparison. When we look at how our customers calculate total cost of ownership, they do consider all those different factors. Fundamentally, when it's programmable, you can tune it and get the best performance over time; when it's for a very specific application, you get better efficiency if you know exactly the scale and the application you want to use.
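The framing here can be made concrete with a rough back-of-the-envelope sketch. Every number below is invented for illustration; hardware prices, NRE, lifetimes, and token throughput are assumptions, not AMD or customer figures. The point is only how amortized upfront R&D and a shorter useful life change cost per token.

```python
# Hypothetical back-of-the-envelope TCO comparison: programmable GPU vs. ASIC.
# All figures are illustrative assumptions, not disclosed numbers.

def cost_per_token(hw_cost, upfront_nre, useful_life_yrs, tokens_per_yr):
    """Amortized cost per token over the hardware's useful life."""
    total_cost = hw_cost + upfront_nre
    total_tokens = useful_life_yrs * tokens_per_yr
    return total_cost / total_tokens

# GPU: higher unit price, no NRE borne by the buyer, longer useful life
# because software tuning keeps it productive as models change.
gpu = cost_per_token(hw_cost=25_000, upfront_nre=0,
                     useful_life_yrs=5, tokens_per_yr=1e9)

# ASIC: cheaper unit, but the deployer carries amortized R&D (NRE), and the
# fixed-function design risks obsolescence in one to two years as models evolve.
asic = cost_per_token(hw_cost=10_000, upfront_nre=20_000,
                      useful_life_yrs=2, tokens_per_yr=1.2e9)

print(f"GPU  cost/token: {gpu:.2e}")   # lower despite the higher sticker price
print(f"ASIC cost/token: {asic:.2e}")
```

With these assumed inputs, the cheaper ASIC ends up costlier per token once upfront R&D and the shorter life are counted, which is exactly why the upfront ASP alone is not a direct comparison.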

Great. And I guess before I get to the roadmap: the ubiquity of AMD, the fact that you can be at several different cloud vendors, means you can make investments in the software ecosystem that benefit all of your customers, a wide range of people who don't wanna be locked into a single cloud vendor for their workloads. AMD's gonna have that appeal. How important is that, and how does that inform the acquisitions you're doing, the software acquisitions and ZT?

Jean Hu
EVP, CFO, and Treasurer, Advanced Micro Devices

Yeah, thank you for the question. That's a very important part of AMD's strategy. If you look at the ROCm software stack, it's open source. Not only do we completely support all the frameworks, PyTorch, Triton, JAX, and all the different things, we also work with the overall ecosystem. And you can run it in Microsoft, you can run it in other CSPs, to really help customers no matter which framework they use to write their models. That is part of our strategy: to make it easy for customers to deploy their models on AMD. And you should expect us to continue to drive really aggressively to build up that ecosystem. The software acquisitions we did add more capabilities and broaden our customer engagement so we can support more customers.

I mean, ZT in particular seems like a lot of work, right? You're buying an ODM, divesting the hardware portion, keeping the engineering resources. What is that getting you in return for the work that you're doing there?

Jean Hu
EVP, CFO, and Treasurer, Advanced Micro Devices

I'll start; Matt can add. The ZT acquisition is actually a very important step in building our capabilities. It's about adding design capabilities at the rack level and system level to help customers build not just small clusters but very large clusters, for both training and inference. It is a lot of work, but when you think about it, not only are we going to keep the very large design team, which will give us those capabilities; more importantly, we'll have a strategic partnership with the manufacturing operation, for which we are seeking strategic partners, and that will help us speed time to market, right? We'll work closely with our customers to help them build different clusters and systems. That's really critical in today's market. Matt?

Matt Ramsay
Head of Investor Relations, Advanced Micro Devices

The only thing I would add, and Jean, you covered it well, is that as we bring the ZT design team into the company, it will have significant influence on our MI400-generation product in 2026. We've learned a lot watching what's happened in the industry over the last 12 to 15 months in terms of pulling together rack-scale systems. And I think what you'll see from AMD is a much less prescriptive approach to system design partnership with our OEM and ODM partners from a reference design perspective. For every large AI company, every hyperscale company, the data center infrastructure is not uniform. It's not the same.

What one customer might want for their data center footprint may be very different from what another customer wants. So being able to have a rack-scale reference design that's not prescriptive, but allows some flexibility and customization for individual customers' data center builds, is what we're going to be bringing to the table as we move forward with the ZT team integrating into the team we already have inside the company.

Good stuff. So maybe we could talk about the roadmap a little bit, starting with MI350 this year. How game-changing is that product? Are you gonna get new customers or replacement customers? Obviously, people are gonna migrate to 350, but how important is it for generating new demand?

Jean Hu
EVP, CFO, and Treasurer, Advanced Micro Devices

Want to start?

Matt Ramsay
Head of Investor Relations, Advanced Micro Devices

Sure. It's an important product for the company. There are some significant new capabilities in networking, in memory capacity and addressability across a cabinet, and in data type utilization down to FP6 and FP4. A lot of work has been done in the ROCm software stack that will be introduced alongside MI350 to take higher-level models and map them onto the underlying topology of the hardware. I think it expands our performance levels significantly when it comes to large-model inference. Lisa's been pretty public about up to 35 times performance gains for inference, and it expands the aperture of the training capability of the Instinct roadmap to tens of thousands of units in training clusters.

And then it gives us something significant to build on in the 400 generation for frontier-level training models. We were really, really excited to be able to pull that product in to launch at mid-year; I think that was a few months earlier than most of this audience might have expected. We're anxious for it to get going, and the customers are anxious for it to get going. We have talked about bringing additional lighthouse accounts into the Instinct portfolio as that product launches. More to come in the middle of the year as we officially launch.

And then with MI400, you alluded to a lot of the rack- and cluster-level benefits that you bring. Is the frame of reference there MI400 versus Rubin? It's obviously hard to talk about future products from different companies, but is this the point where you can take a much bigger role in training, things like that? What's your confidence level, based on what you know of your competitor's roadmap?

Jean Hu
EVP, CFO, and Treasurer, Advanced Micro Devices

Yeah, absolutely. I think the way to think about it is that MI350 is more comparable with Blackwell, and MI400 is really to compete with Rubin. Each generation, we are doing much better. Once we get to MI400, we do feel we'll have a very competitive product portfolio that supports rack-level, system-level, and cluster-level buildup. We have not shared a lot of details yet, but the plan is to really drive a more competitive product roadmap there.

Okay. And I do wanna spend time on the other 80% of your business, but I think we still have to ask a couple of other AI questions. That tens-of-billions forecast, tens of billions of dollars of revenue potential around these products, presumably that is something you're saying based on conversations you're having with customers about the opportunity you have. What has to happen for you to achieve those kinds of numbers?

Jean Hu
EVP, CFO, and Treasurer, Advanced Micro Devices

Yeah. First, we have to execute our roadmap, right? Not only the hardware roadmap, but software and the ZT Systems integration, to make sure we continue to drive all the execution flawlessly. Secondly, it is very strong customer engagement. When we engage with our customers, it's not just about generating revenue currently; it's always about roadmap discussions, feedback from customers, and how we can provide the best TCO. And, as we all know, the build cycle is quite long for those large clusters and data centers. You actually need to figure out the power, the space, and everything else. So those are the important things we need to work on with our customers, partnering with them. ZT Systems is a very important part of this equation; it will help us speed up time to market to support our customers.

Great. And then the last AI question for me: export controls. We have rules that are supposed to go into place in mid-May, and we don't know if those will be the final say. How is AMD positioned to deal with potential government export controls?

Jean Hu
EVP, CFO, and Treasurer, Advanced Micro Devices

Yeah, we're monitoring it very carefully. It definitely has been a topic everybody is really focused on. For us, the number one objective is that we need to meet the export control criteria. We do think there are a lot of opportunities with sovereign AI, and China is a large market too. Those are opportunities we want to address going forward, but of course while complying with the export controls.

Great. So I'd like to pivot and ask you about some of the x86 businesses, starting with servers. You guys have done well in a server market that's been tough, right? A lot of this AI investment has caused people to go as far as lengthening the depreciable lives of their servers. Since you have the highest market share with the people who are doing that, it seems like a headwind, and yet you've grown pretty nicely. What's your visibility to that? You've talked about the staleness of the installed server base. When do you think we might start seeing more refresh business?

Jean Hu
EVP, CFO, and Treasurer, Advanced Micro Devices

Want to start?

Matt Ramsay
Head of Investor Relations, Advanced Micro Devices

Yeah, sure. I think there are a couple of things, right? There was a period of time when there was this massive CapEx pendulum swing towards AI, and we all witnessed what that looked like in 2023 for the server market. More recently, one of the limitations for folks who actually want to add AI hardware is data center space. If you look back, there's a lot of compute infrastructure in the data center that's still two, three, maybe four CPU generations old, and upgrading it to our Turin platform, either Turin Dense or Turin Classic, can get you significantly better data center footprint usage, in addition to some of the work we're doing on head nodes for GPU clusters. I think we feel really good about where that server business is. It's Dan's business.

Dan McNamara, who runs that business internally for us, has really good product up and down the stack, from Turin Dense, a direct Arm competitor in a lot of instances, built on top of the Bergamo platform that's been really successful for us, to the core-count lead the portfolio has across the board. In addition, all of this AI work that's being done actually generates a lot of need for classical computing alongside it. So you see the ability to refresh for space, and I think we gained five or six points of server share last year overall.

I think we're pretty confident about the server share gains to come in 2025, and we look forward to continuing the leadership position across the whole breadth of the roadmap as we move beyond the Turin generation.

On the enterprise side of the server business, meaning both enterprise OEMs but also cloud that's servicing enterprise, you guys have had a technology lead now for six or seven years. Obviously, there's still an incumbency advantage for Intel in some cases, but are you able to continue to break through that? And if Intel narrows the gap, does that make it harder? How big does the lead have to be for the enterprise market to keep swinging towards AMD?

Jean Hu
EVP, CFO, and Treasurer, Advanced Micro Devices

Yeah, we have been investing in enterprise go-to-market for the last couple of years, and that has helped us make significant progress in the enterprise market. Over the last six consecutive quarters, we have been growing our enterprise business year over year. That is really because of the TCO benefit; as Matt mentioned, data center space and power are the major constraints even for enterprise. So when we can provide TCO that helps them save power and space, we do see that if we show customers our TCO performance, they will switch.

There's actually a very clear choice there. We just need to get to the different enterprise customers and have the go-to-market engine to help them make that switch. So overall, we feel pretty good generation over generation, not only Genoa but Turin. We actually have more platform designs with Turin, because we brought support for all the different workloads and applications with the Turin platform. So we do think we can continue to gain market share in the enterprise market.

Matt Ramsay
Head of Investor Relations, Advanced Micro Devices

Yeah, Joe, the only thing I would add is that on the enterprise side, despite having, as you mentioned, product leadership for a number of generations, the focus of enterprise CIOs had shifted towards what the heck they were going to do with AI, right? That's where the board pressure came from. In recent months, it's kind of swung back a bit. AMD has progressed not just as a technology leader, but now as the safe vendor of choice. As you look forward and plan your infrastructure over the next one, three, five years, what vendor do you want to rely on for your infrastructure? One that has presence in all the clouds for overflow and a multi-cloud strategy, but also continued roadmap execution and a stable roadmap.

That question was really top of mind for a seven-, eight-, nine-year period. Then when AI came, it went down the priority list for CIOs, and it has quickly popped back up, both from the space and power constraints perspective we discussed, and also from a continuity of roadmap execution and supply perspective. The environment's changed a bit, and I think Dan's business is positioned well for enterprise over the next 18 months.

Great. And then shifting to client, really impressive performance there also. I think you grew 58% year on year last quarter, with a seasonal outlook. Your competitor called out that they thought there might be a tariff pull-forward in their numbers. So how do you feel about market share in that space? Are those numbers entirely share-driven, and is your visibility on that continuing to improve through the year?

Jean Hu
EVP, CFO, and Treasurer, Advanced Micro Devices

Yeah, we are very pleased with our client business performance. It has been primarily driven by a strong product portfolio. If you look at not only desktop but also notebook, we have the best lineup. Our Ryzen 9000 desktop processors have been selling out in really every channel; in a lot of retail channels you can actually see we get to 70% market share. Sell-through has been really strong, not only in Q4 but post-Chinese New Year; we continue to see strong sell-through because of the gaming performance advantage and the user experience we can provide to customers. On the notebook side, the Ryzen AI 300 processors have been really successful. We have 150 different platform design wins, almost double Intel's comparable design platforms. Not only do we have the best CPU, we have the best GPU and the best NPU.

The combination really helps customers on the gaming side, on the productivity side, on the user experience. So that helps a lot. More importantly, on the OEM customer side, not only do we have strong relationships with Lenovo and HP, we actually added Dell for the first time as a strategic partner to introduce commercial platforms across the board. That will help us continue to drive sell-through and continue to gain market share.

Great. So I'll have one more segment question and then I'll open it to the audience. Embedded: it's been a challenging period for the Xilinx FPGA business, as it has been for kind of all broad-based market companies in the last year or so. Can you talk about the progress there and any visibility you may have into growth in embedded?

Jean Hu
EVP, CFO, and Treasurer, Advanced Micro Devices

You're right, it has been a prolonged cycle of inventory normalization. We do see some early signs of a really gradual recovery. The end markets we participate in, aerospace and defense, as well as emulation and test, those are actually quite steady and resilient, and you see signs of improvement. Industrial continues to be very mixed from a demand perspective.

Overall, we do expect this year to be the year of recovery, probably a slow one. But I will say one thing: during this kind of down cycle, we actually won tremendous design wins because of our team's focus and execution. For 2024, we had $14 billion of design wins, which is about a 25% increase year over year. That will help us in the longer term when the market fully recovers. Right now, sell-through is improving slightly; we can see that, so that definitely is going to help us.

Matt Ramsay
Head of Investor Relations, Advanced Micro Devices

Yeah, Joe, when I think about the overall AMD financial model, we started the conversation with AI, and we added more than $5 billion in revenue from our AI programs in a year, which, from a standing start, as you point out, is a heck of an achievement. But at the same time, there were cyclical challenges in a couple of our businesses, right? The embedded business declined significantly, like the industry, and gaming as well.

Jean Hu
EVP, CFO, and Treasurer, Advanced Micro Devices

Gaming declined very significantly.

Matt Ramsay
Head of Investor Relations, Advanced Micro Devices

And so we had a couple of businesses that were headwinds that masked, at the top line, some really exciting progress in our core franchises, and I think we feel pretty confident that those headwinds are behind us. As for how quickly they turn into tailwinds, I'd rather, for this audience, under-promise and over-deliver with respect to turning those businesses around. But we feel really good, to Jean's point, about the design wins in the Xilinx business, and we just launched a new gaming GPU recently. So there's some momentum starting to build. At a very bare minimum, I think you'll see the exciting franchises in client, server, and data center GPU drive the P&L without the headwinds that have been there for the last 12 to 15 months in the other franchises.

On gross margins in embedded, the question comes up in the context of your competitor saying that their gross margins are significantly lower than a few years ago. It seems like, when these were two public companies, there was a discipline of not chasing markets that were converting to ASICs. And it seems like your gross margins are still roughly at the level from when you acquired Xilinx. Can you talk to that?

Jean Hu
EVP, CFO, and Treasurer, Advanced Micro Devices

Yeah, we continue to be very disciplined. When you look at the FPGA franchise, not only are we number one in market share, we are also very focused on the mid-to-high-end market, with aerospace and defense and also a lot of emulation and test. We continue to drive the team to be as disciplined as in the past, and they have done a great job, right? It's all about margin improvement. So we continue to see very strong gross margins from the FPGA business.

Great. Do we have any questions from the audience? One in the front? Maybe just say it and I'll repeat it.

Yeah, Joe, I just wanted to get a sense of, a lot of investors are looking at ASP per core. Why is that the wrong measure? Do customers look at ASP per core or?

Yeah. The question is on ASP per core. Is that the right way to look at the market?

It's a good way to look at it. If you look at our server business over time, we have been increasing our core counts generation by generation. If you can keep your per-core price largely consistent, each generation you can actually increase your overall ASP, because you provide better performance for your customers. So it is a good way to look at it, but it takes more time for third-party analysts to track. In general, we do look at it that way.
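The dynamic described above, holding the per-core price roughly steady while core counts rise each generation, is simple arithmetic. A small sketch with hypothetical core counts and a hypothetical $100 per-core price (neither is an actual AMD figure):

```python
# Illustrative only: core counts and the per-core price are hypothetical,
# not AMD's actual pricing or roadmap figures.

price_per_core = 100  # USD, assumed roughly constant across generations

generations = [("Gen N", 64), ("Gen N+1", 128), ("Gen N+2", 192)]

for name, cores in generations:
    asp = cores * price_per_core
    print(f"{name}: {cores} cores -> ASP ${asp:,}")
# The per-core price stays flat, yet the blended ASP rises each generation
# because every part delivers more cores, i.e. more performance.
```

This is the sense in which ASP per core can hold steady while overall ASP, and revenue per socket, still climbs.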

Any other questions from the audience? I guess then I'll close with: how do you think about R&D dollars? You're characterizing AI as a $500 billion opportunity, and you have tens of billions of revenue potential. I know Lisa pretty well. She's not gonna be happy with 5% share; she's gonna strive for pretty big numbers. Your competitor's spending $16 billion. So how do you think about the need to invest more, or to invest ahead of those revenue levels?

Jean Hu
EVP, CFO, and Treasurer, Advanced Micro Devices

Yeah, that's a great question. The way AMD thinks about resource allocation is, given the very large growth opportunity, first, we are leaning in on investment. Not only on the R&D side, but also on the acquisition side. We have a very strong, very much under-levered balance sheet, so we can leverage it to invest and do acquisitions, on the software side or like ZT Systems. That being said, the company is very disciplined and focused overall on innovation. Think back 10 or 12 years, when Lisa and Mark Papermaster and Forrest and the whole team joined the company; they had resource constraints even back then, right? So with innovations like chiplet design, AMD really led that innovation.

It came out of the resource constraints the company had at that time. But over time, the team has thought through how we can be disciplined and how we can innovate. We co-innovate with TSMC on packaging technology and on process technology to make our architecture and our overall products much more competitive. That has been the team's execution model: we're going to lean in on investment, but we're going to invest in a very smart, efficient way. If you look at our R&D investment right now, in 2024 we were doing three generations of data center GPU at the same time: MI325, MI350, and MI400. At the same time, we are investing in our server CPU roadmap, our client CPU roadmap, and our gaming graphics roadmap. The platform leverage, how the team thinks about investment, is really to leverage the CPU platform, the GPU platform, and the software platform.

As a CFO, you always want to control OpEx, but I'm actually really impressed by how the team thinks about it. They have always been very thoughtful in balancing driving the company's long-term growth against expanding the margin. Overall, I will say we will always grow revenue faster than operating expense. At our scale, that will drive very significant operating leverage.
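The closing point, that growing revenue faster than operating expense produces operating leverage, is mechanical. A hypothetical illustration (the starting figures and growth rates below are invented, not AMD guidance):

```python
# Hypothetical illustration of operating leverage. Starting revenue/OpEx and
# the growth rates are invented for illustration, not AMD guidance.

revenue, opex = 25.0, 10.0  # $B in year 0 (illustrative)

for year in range(4):
    print(f"Year {year}: revenue ${revenue:.1f}B, opex ${opex:.1f}B, "
          f"opex at {opex / revenue:.1%} of revenue")
    revenue *= 1.20  # assume revenue grows 20% per year
    opex *= 1.10     # assume opex grows 10% per year
# OpEx shrinks as a share of revenue each year, so more of each incremental
# revenue dollar falls through to operating income.
```

With these assumed rates, OpEx falls from 40% of revenue to roughly 31% within three years, which is the leverage effect being described.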

Very helpful. Jean, Matt, thank you so much for your time.

Thank you.

Matt Ramsay
Head of Investor Relations, Advanced Micro Devices

Thank you.
