Welcome to AMD Financial Analyst Day. Please welcome to the stage Matt Ramsay, CVP, Financial Strategy and Investor Relations.
Good afternoon, everybody. Thank you for enduring snow and cold weather, and happy Veterans Day, even though you guys had to endure the parade; it's an important thing for us to acknowledge. It's been a very interesting 12 months or so since I joined AMD, and it's been two and a half years or so since we've done an Analyst Day, so thank you all for coming out and spending some time with us. I know your time is valuable, and we really appreciate it. I get to present the most exciting slide of the day. For those of you with really good vision, this is our cautionary statement. Turns out the entire point of us having one of these Analyst Days is to make forward-looking statements, so we're going to make plenty of those.
But it is important that you guys take note of the cautionary statement and go look for the risks and uncertainties that come with the forward-looking statements that we make. Those are going to be in the 10-Q and 10-K filings that we file with the SEC, and you guys know the drill. In addition to that, pretty much every number that we're going to talk about today on stage is a non-GAAP number. In the appendices of the slides and in our SEC filings, there are GAAP to non-GAAP reconciliations of all of those numbers, so please take note of that as you digest the financial information that we give here today. So here is the agenda. I think we have a lot of really substantive things to say that are relevant to the market, relevant to our business, relevant to our technologies.
You guys will get to hear from Lisa introducing everything about AMD, and from Mark, who will talk about the technical roadmaps over the last 10 years and the 10 years to come. Then Forrest and Dan and Vamsi are going to spend about an hour going through the data center business in detail, and then we'll take a break. The one thing I would ask: we're webcasting all of this and trying to keep it on schedule, so if you could please take your 20-minute break and be back in your seats when we're ready to kick off again, I would really appreciate it.
After the break, Jack and Salil will talk through the client and gaming business and the embedded business at AMD, and then Jean will come up with the financial model, which I'm sure you guys will stick around for at the end. Then Lisa will bring us back to close before we do Q&A. It should be a really, really good day, and we're very thankful that you've come and spent it with us. All right, so now without further ado, I get to do something that's an absolute thrill for me personally, but probably unnecessary for you guys: introduce someone who absolutely needs no introduction to this audience, our Chair and CEO, Dr. Lisa Su.
All right, good afternoon, everyone.
Good afternoon.
Thank you for joining us here in New York, and thanks to everyone joining us online. We're really excited to be here today, excited to see so many friendly faces, and really excited to talk about really what's happening in the market. I mean, it's a really special time for the technology industry, for the technology market, AI overall, but then also it's a very exciting time for AMD, so let me go ahead and get started. What I'd like to do is basically give you a bit of an overview, starting with some of our strategy, some of our execution, and then really what we see going forward, and then have the team really take you through the deep dive, but starting first with our overall mission, I think we've been very consistent here that high-performance computing is the foundation of really everything that's important.
When you talk about solving some of the world's most important challenges, we've actually updated it to high-performance and AI computing because AI is such a large piece of the computing world today, but this is our goal in life. This is what we wake up every day to do, and when we look at where we are today, I like to say that AMD computing touches billions of people every day because it's true. When you look across data center, cloud, some of the most important services that everyone uses, enterprise or personal, across edge and intelligent devices, across all of the different market segments where you need and use computing, you can see AMD throughout all of those places, and increasingly, you see AI everywhere as well.
So that's going to be another theme of our day: yes, we're going to spend a lot of time talking about the cloud and the data center because those are the largest markets, but we're really seeing AI diffuse through all aspects of the business, including at the edge and in the client world. So, a little bit about our focus areas and how we got here. When I look back, you'd say so much has changed if you think about how the market has really evolved over the past few years, but actually, for us, our strategy has stayed very consistent. I mean, these are what we said were our strategic pillars a few years ago.
And when I look back on it, I say these are exactly the things that we invested in, exactly the things that we were focused on, and I'll talk a little bit about how things have evolved over the last few years. Certainly, compute technology leadership, that's our foundation. We have to have the compute capability. But we said that data center was going to be the most strategic market for AMD. This is where we saw tremendous growth. This is actually the part of the industry that fits our capabilities the best, and we've really expanded our data center business. Ensuring that AI goes across every aspect of our product portfolio was also a key pillar, as was ensuring that we had the software. Many of you have asked us about our software and what we're doing around it, so we'll talk more about that.
And then also using the entire power of our IP portfolio to offer both standard as well as semi-custom solutions. These are areas where we think, over the long term, there are lots and lots of needs for compute in the industry, and you want to have the right compute for the right workload, so you need the whole tool chest to address that. And just taking a look at some of the progress over the past couple of years, our data center execution has been extremely strong. We've grown this business from roughly $2 billion in 2020 to an estimated over $16 billion this year, a greater than 50% compound annual growth rate.
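(As a quick check on the arithmetic as stated: growing from roughly $2 billion in 2020 to roughly $16 billion in 2025 implies

$$\mathrm{CAGR} = \left(\frac{16}{2}\right)^{1/5} - 1 = 8^{1/5} - 1 \approx 51.6\%,$$

consistent with the "over 50%" characterization.)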
And when you look at the underpinnings of that, I think there's no question that when we started with the Zen family roadmap and EPYC, people were wondering, hey, just how much traction are you going to get, Lisa? Is this a place where you have durable, long-term competitive advantage? I think we made what was a very, very clear technology bet and really understood where the market was going. As a result, you see EPYC now just about everywhere: 10 out of the 10 largest hyperscalers, every one of the most important social media services. We're actually in over 60% of the Fortune 100 today and growing. That's been a really strong success story.
And then we've also had our Instinct GPU product line, which, with MI300, MI325, and now the MI350 series, we've ramped faster than any other product in AMD history, with seven out of the top 10 AI companies now using AMD Instinct products, the top two fastest supercomputers, and many strategic partnerships announced or in the works. So looking at some of the numbers, what has that meant to our business? We are now at roughly 40% revenue share in the most attractive segments of the market, with the large hyperscalers very much standardizing on AMD technology through our five generations, in addition to a growing enterprise footprint. What that's meant is data center is now our largest segment, almost half of our business, 47% to be more precise. And it's growing.
It's growing much faster than the other pieces, partly because the TAM is growing and partly because we are extremely well positioned from a product portfolio standpoint. On the AI side, we're going to talk a lot about AI today, in particular data center AI, but I just wanted to paint the picture of what's been happening in the AI business for AMD. Clearly, the rate and pace of innovation in AI is faster than anything that we've seen before. We've compressed our schedules so that we went from what was a two-year cycle to an annual cadence.
I think we've done well with adoption of MI300, MI325, and MI355, but there is a huge step function in front of us with the MI450 series, both in terms of technology, just sheer raw capacity and capability, and in going to our rack-scale solutions, as well as, frankly, where the market is. So it's a confluence of the right technology at the right time for a market that is inflecting in a big way. Just a little bit of an overview on the client and gaming business. You're going to meet Jack Huynh today. Some of you may not have met him before. He's our new leader in the client and gaming segment. I think we've done a fantastic job with this business. People have often asked us, what do we think of the PC market?
I mean, the PC market is a great market. It's a market where a lot of silicon is consumed. This is how people really touch compute. What was most important for us in this market is that we picked the segments where technology mattered, and we focused on those segments and ensured that we had product leadership. So when you think about premium, when you think about gaming, when you think about enterprise, those are the most attractive portions of the client market, and that's where we have done exceptionally well, particularly over the last 12 to 18 months. When you put all that together, we are currently at 28% client revenue share, but we're really inflecting in this business as well, with over 50% of the desktop CPU channel, and we're at a place where AI is really in its infancy in the PC market.
And so as that scales, I think our product portfolio strengthens, and we see this as a growth market for us as well. Then you're going to hear from Salil Raje on the embedded business, which is primarily the former Xilinx business together with the embedded x86 business. What you see in this business, putting aside some, let's call it, noise over the past couple of years from an inventory correction, is an extremely strong business that has access to a set of markets that AMD never had access to before: over 7,000 customers across automotive, industrial, aerospace and defense, and communications. We are in all of the largest firms from a technology standpoint.
And perhaps in this business, the most important metric is not near-term revenue, because near-term revenue is very much a matter of things that happened in the past; the most important thing is design win growth. What you're going to hear from Salil is that since the acquisition of Xilinx and the integration into AMD, we have grown design wins at a significantly faster pace than the former Xilinx did with an FPGA-only portfolio. Our design wins exceeded $14 billion in 2024, and we're on a path to exceed $16 billion this year, so over $50 billion of new design wins at a CAGR of 21%. That gives us great confidence in the long-term revenue trajectory. And Salil will talk about some of the ways where bringing the portfolio together has really ended up with tremendous synergies.
And then one area that we haven't talked a lot about with this audience, but I thought this would be a good time to bring out, is where we are with semi-custom design wins. My philosophy on semi-custom has always been that you need to have the right product for the right workload for customers. And when you get into very high volume or very specific workloads, there are advantages to taking a standard product and really customizing it for those specific workloads. What has been well known for AMD is that we've been number one in game consoles for a long time. We're very thankful for the relationships that we have with the gaming customers. And that was, let's call it, about $20 billion of lifetime revenue, give or take, when you think about a five- or seven-year cycle of that business.
What we have done over the last 12 to 18 months, though, is really expand the aperture of what we do in terms of semi-custom. Certainly, we're very pleased with the traditional gaming business that we've been a part of, but we've also closed new design wins in aerospace and defense, automotive, data center, and communications, now totaling over $45 billion of won design-win content, with revenue starting to ramp in 2026 and beyond. So I think that tells you the power of the model. We'll talk a little bit about the technology underlying that and how we've been able to do that with some of the technology investments that we've made.
This is an example of where our long-term strategy of investing in a modular IP roadmap and the ability to mix and match the different pieces of our IP portfolio has really paid off. I think this will be an area of consistent growth for us going forward. Putting all that together, the other piece that we have been very focused on is strategic investments. This is an area where we always have to bet ahead of the curve. We have to make strategic bets, and there's no question that, by and large, the largest bet we've made is in AI. If you look at just our organic investments, we've significantly ramped the engineering talent in the company. We have over 25,000 engineers now, both hardware and software.
The largest growth in engineers has been in software and platform, which you might expect given where we're going with AI systems. We've also built what I would say is an M&A machine. We are now very confident and comfortable with M&A execution. When we first started this, Xilinx was the first large acquisition that we did, and people were always like, "Lisa, are you sure you can integrate this and add value and not have dis-synergies?" I think what we've shown is excellent execution on the large ones. In addition to the revenue in the business that we're going to talk about, Xilinx has added significant technology capability.
Think about the AI engines, the SerDes capability, the software capability, and our AI leadership, which sits under Vamsi Boppana right now, a very key executive for us. We also added, with Pensando, significant networking expertise that is a key part of our AI capability. And with ZT Systems, we've added all of the rack-scale solutions expertise that from day one started adding value and accruing value to our MI450 and Helios capabilities. In addition to that, we've done a number of software acquisitions. Many of these are smaller acquisitions, but what we've done here is really in the essence of trying to ensure that we have as much and as fast a ramp as possible on our ROCm software stack.
We have found some excellent, excellent teams that have added to Vamsi's team to ensure that we have best-in-class software. The other arm that we have stood up is a much more active venture investment arm. I would say this was something that we didn't do as much a few years ago, but now this is an active part of our portfolio. Matt Hein runs it for us. And here what we're looking for is, I wouldn't say picking winners and losers, because it's hard to pick winners and losers when you're investing in startups, but picking leaders who we think will be influential in the AI ecosystem, whether at the hardware, software, application, or model level, to ensure that we are keeping a good pulse on what's happening in the ecosystem and also familiarizing people with AMD content.
So these are the key strategic investments that we've made, which I think have really added to our capabilities in AI. Putting that all together, the overall revenue growth story has been a strong one. We're now projecting to be about $34 billion this year, up from a much smaller number in previous years, and that's also significantly improved our overall profitability. So that gives you an idea of how we got to today. Now let's talk about where we're going because, clearly, there's so much going on in the industry. We want to give you a sense of where we think our place in the industry is and also where the market is really going.
So starting first with some of the market trends. I'll spend a little bit of time on these because I know that you guys are all thinking about them. I would say that the rate and pace of change in AI is certainly beyond anything that I've seen in my tech career, a super-intense speed and pace, with just massive infrastructure build-out. And what do I mean by that? We've actually seen this evolve because we're always talking to our largest customers. If I had asked them last year, they might have said, "Hey, Lisa, we're investing right now, but we believe it'll level off as the compute comes on board." And if you ask them today, what they would say is, "No, it's actually not going to level off.
We need to accelerate. We need to put more AI infrastructure in place." There's a real belief that AI compute really equates to intelligence, so if you have the chance, if you have the balance sheet, if you have the capability to put on more compute, you're going to do it, because it's going to give you incremental advantage versus your competition. So it's really, I would say, an insatiable demand for AI compute, which is kind of interesting. There's obviously lots of work on power and on ensuring that you get the full power and data center build-out ready for that. We're also seeing that it is really too early to talk about winners and losers. There are lots and lots of models out there, and I think the number of models is growing.
In addition to the foundational model companies, which most people spend time on, there's a next layer of companies helping enterprises really use AI, with fine-tuning and other services layered on top, and I think that's going to become more important. And then probably one of the trends that you guys have asked us about from time to time is what happens to the CPU market when you have AI accelerators growing so fast. I would say up until this year, there was a bit of a thesis that somehow GPUs would take a lot of the CPU workloads. What we have seen is that that is not true. We've actually seen the opposite be true.
As more AI has been deployed, with the models out there and people using AI in enterprises, we're actually seeing that CPU demand is accelerating, and Dan McNamara is going to talk a lot about why we think that's happening and what we see. It's now a consistent trend that we're seeing with multiple large customers, on the cloud side as well as the enterprise side, so I think it's a great thing. I think it makes sense, right? AI is driving new intelligence, new capability. As you have more agents out there, those agents have to work on something, and they're working on things that require more general-purpose compute. We are also seeing inferencing growing faster than training. I think that's something that we all kind of expected, but we're starting to see it now.
Enterprise-class solutions are also starting; I would still say we're relatively early in the adoption curve, but there's a lot of interest and experimentation there. And then clearly, sovereign AI and nation-scale deployments are an area where we continue to see every country wanting control of its own AI compute and the ability to select how to deploy it. This is one of the key trends that's also accelerating the TAM growth overall. And then as you go beyond the cloud, I think we will see more AI, whether at the edge or in PCs, as this is where new experiences are coming in. This is also still early in the cycle, so I would say it falls in the latter part of the growth for our long-range plan.
It is certainly an area that we think is interesting. So when you put all that together and we talk about market sizes, it's always hard to call what a TAM is. When we first started talking about the AI TAM, I think we started at $300 billion, and then we updated it to $400 billion and then $500 billion. And many of you said, "Well, that seems too high, Lisa. Why would you think those numbers should be so high?" It turns out that we were probably closer to right than wrong in terms of the acceleration of AI spend. That really came from lots of discussion with lots of customers about how they saw their computing needs framing out. So when we look at the market now, we decided to do a couple of things.
One is to extend our horizon out over the next five years, to a 2030 TAM, and also to make it a more inclusive TAM that includes not only accelerators, which are very important, but also the CPUs and some of the networking content that we add. So our current perspective is that we're talking about a TAM that is greater than $1 trillion by 2030. If I look at our previous forecast of greater than $500 billion for 2028, we're seeing that number come up significantly, and we expect extensions of that growth rate going into 2029 and 2030. So it's an exciting market, there's no question. Data center is the largest growth opportunity out there and one that AMD is very, very well positioned for. So that sets the stage for where we're choosing to invest going forward.
So now in terms of our strategic pillars, similar types of things. I mean, what I really want to emphasize is our strategy has been very, very consistent. And I think you need to be when you're in the technology space because, frankly, these product cycles are long. These are our strategic pillars, and this is how we're going to organize the day for you. We start with just compute technology leadership. That's foundational to everything that we do. We are extremely focused on data center leadership, and that data center leadership includes silicon. It includes software. It includes the rack-scale solutions that go along with that. And then part of our software story is not just that we will have a very competitive software stack. It's also that we have a software stack that extends across all platforms from cloud to edge to client.
And so we'll talk a little bit about that. And then powering AI everywhere outside of the cloud. So if I just double-click on each one of those to kind of give you the highlights of perhaps what we would like you to spend some time on. First, on the technology leadership, Mark Papermaster is going to take you through this. Mark's been my partner over the last 10 years, really developing sort of the entire technology stack that we have at AMD. I think we're very proud of the fact that we made some good decisions along the way. Clearly, our investments in the foundational CPU and GPU IP were critical, but that's really expanded. I think the fact of the matter is these systems are getting so complex these days. I mean, our chiplets now are mixing and matching all of the latest and greatest technologies.
It's extremely important to have the right interconnect, and I think our bet on Infinity Fabric allowed us to do that. I think we're leading the industry in terms of advanced packaging, and we're making significant investments in the SerDes and other technologies that are necessary to interconnect all of these things. So Mark will take you through that. But the key is our absolute commitment to investing for technology leadership across the board, and I think the team has done a great job with that. In terms of the data center, as Matt said, we're going to spend a good amount of today on this, about an hour, and Forrest, Vamsi, and Dan are going to go through it. This is really an all-company affair.
So if you want to talk about something that is front and center for everyone at AMD, it's ensuring that we accelerate our data center leadership. The thought here is really, across all of the major segments, cloud, enterprise, and supercomputing, ensuring that we have the right compute for the right workload. I am a big believer that there is no one solution that scales across every market segment. Every segment has a uniqueness that it's trying to drive with its workloads. And that's across CPUs, GPUs, and accelerators, as well as across high-end, mid-range, and entry-level. So I think we've built an expansive portfolio to do that, and we'll talk more about it.
I think the other piece of our data center strategy that is unique is that our intention is to have an extremely capable, full rack-scale solution, but also to do that in an open ecosystem. An open ecosystem means that our customers can decide whether they use all AMD, and we're very happy if they choose to use all AMD, but we're also okay if they want to interoperate with other standards out there, because it's good for their business and they have other things that they're trying to optimize. So that continues to be a key foundation, and Forrest will certainly go through that. On the software side, I think we've made tremendous progress. This has been one of the areas where people have said we needed to make significant investments, and I think we've done that.
I think ROCm is today the industry's premier open-source AI stack. What we've done is get significantly more focused on ensuring that we have native support for all of the leading models at day zero, native support for all of the leading open-source frameworks out there, and that has really been something that Vamsi and his team have worked on, along with ensuring that we have the full stack of enterprise solutions and developer solutions out there. I will say that we have greatly expanded from just serving the largest hyperscalers to really serving the broad developer base with ROCm, so this is an area that Vamsi will go through in quite some detail. What that says is we now have all of the pieces to deliver full AI factories, and that is really our goal throughout this entire stack: across CPUs, GPUs, software, networking, and our cluster-level systems design.
So in terms of powering AI everywhere, this will be covered by Salil and Jack. AI PCs, gaming, edge AI, physical AI, these are all places where we think our IP plays very well, and we will continue to ensure that we have a full ecosystem that covers all of these capabilities. So that gives you a little bit on the technology strategy and what it means for our business. I'm going to let Jean go through the full financial model this afternoon, so you have to give us a few hours to go through all the technology first. That's okay, right? I mean, we want to make sure that you stay interested and engaged.
What I thought I would do is give you a few nuggets of how we think about growth, so you can have that in mind as we go through each of the presentations. So first, in terms of strategic partnerships: what we've come to believe is that to be successful in the AI industry, deep partnerships are super important, because these systems are so complex and because these roadmaps are multi-generational. And that's really what we've done with our strategic partnerships. We've recently announced several of them, and I'd like to give you a little bit of context on how those fit into what we think our growth trajectory is. First, OpenAI. We were very excited to announce a six-gigawatt partnership with OpenAI.
They are clearly the leader in frontier model development. We had been working with them for some time, I would say from the early stages. But the key is that we actually took the opportunity to develop the MI450 series for many of the use cases that were most important to OpenAI, including optimized inference use cases. So to be able to announce a six-gigawatt deal over five years, I think that sets a real foundation for our roadmap and for our overall adoption curve. We also announced an expansion of our Oracle partnership to MI450. Oracle will be one of the first, if not the first, to offer public instances of MI450. And what we're really trying to address here is that, in our experience, going from when we finish development to when you actually see a public instance can take some time, because these are very, very complex systems.
Our goal with these partnerships is to get to the place where, when we finish development, you can actually go and get an MI450 instance in the cloud very shortly thereafter. That partnership is really targeting the third quarter of 2026 as the ramp of that capability. We also talked about, and showed for the first time, our Helios rack-scale solution with our partnership with Meta, and I'll give a little bit of context on why this has been really critical. First of all, Meta has been an excellent partner for us across MI300 and MI350. They've been very involved in the software work, both on the PyTorch ecosystem as well as the overall ROCm ecosystem. When we really went to rack-scale solutions, we had two choices.
One choice was to develop it on our own with all kinds of proprietary system design, and we certainly could have done that. But in our true partnership mode, we said what the industry really wants is more of an open rack architecture, because, yes, being able to use AMD MI450s is great, but ensuring that there is a set of open standards so that the ecosystem can develop more holistically was important. I think the Helios design, based on the open standards work with Meta, really enables that, and we're very pleased with that partnership. We also announced a couple of new supercomputers with the Department of Energy that are adopting MI400 series capabilities, including actually a new product that we put on the roadmap, the MI430, which Vamsi will talk a little bit more about.
So putting all of these in place is, I would say, a good foundation to talk about what we think our AI growth model is. What's been clear is that there's a lot of excitement around the MI450 series and Helios. In addition to the customer engagements that we just talked about, we have strong momentum with multiple additional gigawatt-scale opportunities, and those are very, very much in the deep design phase with these customers. What customers are telling us is, look, there's a real need for more compute, and what we have with MI450 and Helios is extremely competitive, in many cases leadership, when you think about the memory capability as well as the scale-up bandwidth. And we've co-developed with these customers. So we have multiple hyperscaler, AI-native, and sovereign opportunities that are well underway.
We talked about a revenue target of, let's call it, tens of billions of dollars in 2027, and I can say that we're on track to that. But we also see a very clear path to double-digit share in this very important data center AI market. That translates into, let's call it, a greater than 80% AI revenue CAGR over the next three to five years. So when we talk about the financial model today, we're going to frame it over the next three to five years, and this is what we see as our potential, given the customer traction both with the announced customers and with customers that are currently working very closely with us.
Now, as exciting as the data center AI revenue opportunity is for us, the other message that we want to leave you with today is that every other part of our business is firing on all cylinders, and that's actually a very nice place to be. Some of the other key targets that we're going to talk about today are areas where we think we have the ability to grow significantly ahead of the market. On the server side, Dan McNamara is going to show you that, sitting currently at approximately 40% share, we have a clear path to over 50% revenue share of the server market, over a larger TAM, in the next three to five years. Jack will talk about our client revenue market share. We're sitting, let's call it, in the high 20s right now.
Again, a clear path to over 40% client revenue market share in this three- to five-year time frame. And then Salil will talk about what we see in the embedded market: again, the opportunity to grow significantly ahead of the market, to over 70% of the embedded adaptive revenue market share over the next three to five years. So I think all of these are really exciting targets, and each of the businesses will go through their strategy, their product roadmaps, and why they see a clear line of sight to them. At the AMD level, I want to give you some idea of how we think about revenue growth. What we would say is that we expect to really inflect our overall revenue growth over the next three to five years.
We've done very well over the last few years, growing over 20%, but we really see the opportunity now to take the overall company, from a baseline of, let's call it, $34 billion in 2025, and grow at over 35% CAGR at the total company level. That breaks down into the data center business growing well ahead of the market at over 60% CAGR, and the core businesses, that's client and embedded, growing also well ahead of the market at over 10% CAGR.
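(Compounding the stated targets, purely as an illustration and not as guidance: a $34 billion 2025 baseline growing at a 35% CAGR implies

$$34 \times 1.35^{3} \approx \$84\text{B after three years}, \qquad 34 \times 1.35^{5} \approx \$152\text{B after five},$$

which is the scale of inflection being described.)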
So that gives you a flavor of the type of growth that we're talking about; it's an exciting trajectory. I'm going to leave the rest of the financial model to Jean, so she'll have a bunch to talk about as we wrap up the day. But that gives you a sense of why we're so excited about this time in the industry and this time for AMD. We have just the confluence of events that you would want if you're in tech today, right? We have a large and growing TAM, over $1 trillion at the data center level, with more on top of that at the company level. We have a fantastic product portfolio. And as a team, hopefully you will have seen that we are good at execution: when we say that we're going to do something, we do it. We have a great foundation of deep strategic partnerships, the data center opportunity has tremendously expanded, and we have expanded our capabilities along with it.
So with that, let me turn it over to the team to give you all the details. So Mark, do you want to join me, please?
Thank you, Lisa. I really appreciate you all being here today and the opportunity I have to share with you our technology portfolio. I've been in the industry for decades, and I've never seen such a time of just incredible pace, of transition, of new technologies, and frankly, technology disruption. Guess what? We're a technology company at our heart. Myself, the engineers, the technologists in the company, this is the DNA that we have. This is what we were built for, to take on these challenges. Let's start today with taking stock of the AMD portfolio. Through focused R&D and inorganic acquisition, we've amassed the broadest portfolio of computation in the industry, and it's the computation that's required for the AI era.
We've invested deeply across our leadership core IPs: our CPUs with the Zen family of x86, our GPUs with CDNA for the data center and RDNA for gaming and edge applications. We have optimized for low-power inferencing with our AI engines and neural processing units, and we've extended our reach into networking with Pensando DPUs and, of course, with the acquisition of Xilinx, a deep FPGA and adaptive compute portfolio. What ties it all together? That is our common foundational IP, a huge investment to make sure that we can seamlessly deploy these IPs across our portfolio, with our Infinity Fabric providing the glue across all of the building blocks.
Our system-on-a-chip capability, honed over years of leading-edge integration, physical design, and verification, the kind of skills you amass to optimize chip designs, and then leadership in chiplet technologies, partitioning into chiplets, and leadership packaging capabilities. These have enabled us to quickly adapt to our customer requirements and integrate these solutions. On top of that, we have software that ties it all together for AI applications, foundational across our portfolio with the ROCm stack. Also very, very important, and even more important in this AI era, is security, and we have consistent security building blocks that we put in across our portfolio. And when you put all of that together, all of that IP is backed by a very strong patent portfolio: we have over 13,000 patents issued and almost 18,000 patents in process.
It's a very, very deep investment that backs this IP portfolio, and that's what creates our product pillars. That integration capability allows us to attack data center, client and gaming, and our embedded markets. And as Lisa said, we've expanded the aperture of semi-custom beyond the gaming centerpiece that we had, extending it, as you'll hear more from Salil Raje later, to both data center and embedded applications. Some of you will recall this slide. I actually showed it at our AMD Financial Analyst Day a decade ago, and what it talked about was really the bets that we were making, bold bets that we would set the future of the company on. As Lisa said, these were tough decisions that we made on the long arc of development. So we bet on a new x86 CPU architecture, Zen, to bring competition back to the CPU market.
We called out the first stacked memory and 2.5D, the lateral connection of chips, silicon on silicon, to provide more performant GPUs to the industry, and then our Infinity Fabric, to give us a modular architecture to punch above our weight for the R&D investment that we were making and to pave the way for the partitioning into chiplets in our roadmap going forward. These were far-reaching claims at the time, but we put our heads down and executed, and that's really a hallmark of AMD: our execution. We've delivered five generations of the Zen CPU. We've now split it into high-performance versions and compact, cloud-optimized versions, also used in our networking products, but all maintaining consistency of the instruction set architecture. And we went where no company was willing to go with the bet we made on chiplets when you look at our GPU roadmap.
We invested not only in the 2.5D that I mentioned at that FAD, but in 3.5D, both that lateral connectivity as well as 3D stacking, to create an incredible advantage for our customers, because with that density of computation comes efficiency, and with that efficiency comes a total cost of ownership advantage for our end customers. This has been a key element in driving our GPU market adoption. It's also given us the ability to really reuse IP across the company, and an agility for the sometimes very late-binding decisions we make when customers, particularly our largest customers, need us to tailor to their needs. Where are we today? We're simply accelerating. In this industry, you can't slow down.
And it is our roadmap that is driving us to continue to innovate, from rack scale for the largest of hyperscaler and sovereign AI implementations to, of course, our PC and embedded applications. Our competitor is developing, in the AI era, more proprietary and walled-garden solutions, but we are leaning in on our commitment to open software and open hardware AI ecosystems. Lisa talked about this incredible pace of scale, of the expansion of computing to meet demanding cluster needs, the growth of incredible computing that we need, and that's where we're leaning in as well. We've got the I/O technology that we brought in-house, the investments through ZT in our networking capability, as well as the work we've contributed to UALink as we bring these clusters together. And even more than in prior applications, security is critical.
With AI, you need to know that your data, the customer's data, is protected, and we've extended confidential computing to AI clusters. Now let's dive down a little bit, starting with the CPU roadmap. Zen really reshaped what CPUs were delivering in the industry. If you recall, the industry had plateaued, innovation had died off, and Zen brought back significant performance improvements every generation, double-digit percentage gains in performance and customer value. That came with more core density, higher efficiency, and better economics, leveraging our approach. Look at Zen 5: we re-pipelined for yet more performance from every core that we added. We have also added AI support.
People don't realize the vector engine in Zen 5: it's a native 512-bit-wide vector engine, and along with the software support that you need, it's an outstanding inference engine for a number of applications that aren't so demanding that they require GPUs.
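(To make the 512-bit vector engine point concrete, here is a minimal sketch, not AMD's code, of the kind of kernel a CPU inference path bottoms out in: a fused multiply-add dot product written with standard AVX-512 intrinsics, which a native 512-bit datapath executes at full width. The function name and structure are illustrative.)

```c
#include <immintrin.h>
#include <stddef.h>

/* Illustrative dot product processing 16 floats per iteration.
   Real inference engines use tuned libraries, but they reduce to
   fused multiply-add loops much like this one. Requires AVX-512F. */
static float dot_avx512(const float *a, const float *b, size_t n) {
    __m512 acc = _mm512_setzero_ps();           /* 16 running partial sums */
    size_t i = 0;
    for (; i + 16 <= n; i += 16) {
        __m512 va = _mm512_loadu_ps(a + i);     /* load 16 floats from a  */
        __m512 vb = _mm512_loadu_ps(b + i);     /* load 16 floats from b  */
        acc = _mm512_fmadd_ps(va, vb, acc);     /* acc += va * vb (fused) */
    }
    float sum = _mm512_reduce_add_ps(acc);      /* horizontal sum of lanes */
    for (; i < n; ++i)                          /* scalar tail             */
        sum += a[i] * b[i];
    return sum;
}
```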
Our next generation of Zen is Zen 6. Zen 6 is in our next-gen EPYC that you'll hear more about from Dan later in the agenda. It was the first tape-out in TSMC 2 nanometer, and it's going to further extend our leadership across our CPU line, in both performance and efficiency. And I talked about that consistency of instruction set architecture. We're now over a year into having successfully stood up, with Intel, an advisory group that drives alignment of the x86 instruction set and software ecosystem, protecting the billions and billions of lines of x86 code that customers have out there and safeguarding their investment. Now let's talk about AMD Instinct, our data center GPU. The launch of MI300 in late 2023 marked a shift to an annual cadence of development, which meant CDNA, our GPU building block for the data center, also went to an annual cadence, driving generational improvements in compute as well as memory performance. The CDNA 4-based MI355 is ramping incredibly fast right now, with expanded AI math formats, including block scaling for accuracy preservation. CDNA 5 is what makes up the MI400 series, launching next year, with further expanded math formats for more efficiency and HBM4, bringing leadership memory capability.
Just like the CPU, where we run alternating design teams, with everything through Zen 6 done and Zen 7 beyond it already in design, we do the same thing with our GPU. So we have MI500, our generation after next, well into the design phase, driving yet more compute, memory, and interconnect technology. And this annual cadence is a requirement, just given the pace of that explosive and insatiable demand for AI computing. If you look at this curve, it's showing flop rate, the computation rate that we have in our GPUs. Prior to 2025, you can see that we were on a pace of doubling those flops every two years. After 2025, that pace has doubled: we're on a 2x every single year.
And that's with a baseline of 16-bit math, one of the more predominant math formats. But optimized math formats increase your performance yet again. So what you see is that, on top of that baseline doubling every year, FP8, which is 8-bit math, provides a doubling again over that baseline, and FP4, 4-bit optimized math, doubles it yet again. This is a compounding exponential growth rate of computing that our industry has not seen before.
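(As a rough formulation of the curve just described: if $F_{16}(t)$ is the 16-bit flop rate under the post-2025 doubling pace, then

$$F_{16}(t) \approx F_{0}\cdot 2^{\,t-2025},\qquad F_{\mathrm{FP8}} \approx 2\,F_{16},\qquad F_{\mathrm{FP4}} \approx 4\,F_{16},$$

so three years past 2025 the FP4 rate is roughly $2^{3}\times 4 = 32\times$ the 2025 16-bit baseline. That is the compounding being described.)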
So I mentioned SerDes earlier, and this is really the first time I've done a deep dive on SerDes; it tells you something about the expansion of connectivity that our industry needs. SerDes are front and center, and we saw this coming. Start with the standard I/O, PCIe. Back in 2022, we in-housed the design of our PCIe Gen 5 because we needed more performance and a more optimized implementation, given how widely it's deployed across our portfolio. In fact, that implementation has already shipped tens of millions of units into the industry. We got further scale because, upon the acquisition of Xilinx, we immediately brought the SerDes teams together, and that brought 112 Gbps interconnect technology that was used in networking right into the AMD portfolio. That's, of course, already shipping today in the Versal FPGA, but it's also the SerDes underneath our PCIe Gen 6, which powers our next-generation server, codenamed Venice. And now, when you look at where we're going with the MI400 series and that Helios rack that Lisa mentioned, we double to 224 Gbps technology.
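(A hypothetical illustration, not a Helios spec, of why the lane-rate doubling matters: the lane count for a given aggregate bandwidth is $N = B/r$, so a port carrying, say, 6.4 Tbps per direction needs

$$\frac{6.4\ \text{Tbps}}{224\ \text{Gbps/lane}} \approx 29\ \text{lanes}\quad\text{versus}\quad \frac{6.4\ \text{Tbps}}{112\ \text{Gbps/lane}} \approx 58\ \text{lanes},$$

halving the wires, and with them the package beachfront and much of the interconnect power, for the same bandwidth.)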
And that's critical for our scale-up capabilities, and I'll expand on that in just a minute. Where's the industry going? It's not slowing down. Copper is going to 448 Gbps. But with that signaling technology, when you look at the number of wires you need for the bandwidth we need, and the power dissipation, the industry is going to start the transition to optics at the rack level. So we're preparing both technologies in parallel, 448 Gbps copper as well as the investments and roadmap that we have for optical, and we will manage that transition. And then I always love talking about the Infinity Fabric. I've often called it the hidden gem that we have.
You look at what we're doing: we've continued to innovate, and that's what gives us the scale to power the number one and number two supercomputers in the world. The growth that we've had in hyperscale applications, across both inferencing and training, relies very, very heavily on the ability to seamlessly add cores and have that performance scale, and then to interconnect from chip to socket to rack and, in fact, even to scale out as we build out the clusters. Our third generation powered our first leadership supercomputer, the Frontier system, which is now number two in the world, because the fourth generation expanded its capabilities to power El Capitan, currently the number one supercomputer in the world. It also showed the versatility of the Infinity Fabric.
The baseline MI300 that you see in that supercomputer swaps out chiplets, leveraging the versatility of the SiP. You have the MI300A, the CPU-integrated-GPU version; the MI300X, which powered our AI system development; and then the MI300C, a CPU-only version, which is powering Microsoft Azure's highest-bandwidth CPU in their fleet. So a CPU-only version, a CPU-integrated-GPU version, and a GPU-only version, all with the versatility that we provide with our fourth-gen and beyond Infinity Fabric. And that leads us to the fifth gen that we're introducing today. It's poised to deliver for the MI400 series next year with the Helios rack. It's optimized for best-in-class efficiency across chiplets and sockets, delivering the memory bandwidth that we need to support that advanced system, and it's engineered to support open standards.
And of course, like our other IPs, we're well at work on the generation beyond the fifth. Now I want to talk about chiplets and packaging, which have been a historic strength for us. We've been a longtime leader in this area, delivering differentiated solutions that have allowed us to push forward beyond the effects of a slowing Moore's Law. We're now in our nth generation, but you have to remember the beginnings. When we started with our first-generation EPYC, we implemented it, as you see on the far left side of the slide, with four distinct chiplets that we brought together, and it started us on a path of really leveraging that partitioning capability. Later in the server roadmap, we changed that partitioning to separate compute dies and I/O dies, and we started stacking 3D V-Cache, which gives us an astounding performance capability.
In fact, we're in the fourth generation of that stacked SRAM, that stacked V-Cache, still without a competitor in the industry for that capability. On the GPU side, this chiplet and packaging capability has been critical for us. It's what delivered the 3.5D that I described a moment ago with MI300, the flexibility that I described, and an incredible density, giving us the TCO advantage in our MI300 and MI355 roadmap. And we take it even beyond that for the MI400 series next year: we actually double the packaging interconnect density, and that gives us an incredible silicon packing capability in the next generation.
So when you go back from the start of that first EPYC chiplet implementation to what we use for the Helios system launching next year, it's a 30x increase in the silicon available for compute and memory on that design. Just a significant impact from our roadmap and our implementations. Now, I want to show an example. All of this technology is nothing without the ability to come seamlessly together in a solution, and Helios is the best example. I'm proud to show off the capabilities it has as it brings together our next-gen CPU, our next-gen GPU, our next-gen interconnect technology, and our next-gen SerDes. All of those technologies I just described come together to give us the compute and link technology that we need for leadership capabilities.
It also highlights the co-optimization that you need in an era of rack scale. The design team of ZT Systems was a huge addition for us in optimizing across all of those elements, for performance per watt across the compute elements, right through all of that interconnect, for leading-edge efficiency. And it goes beyond that: it requires software co-optimization as well. The ROCm facets, the communication libraries, and the optimization of the workloads are also co-optimized at the rack level. I'd love to pull out a compute tray and give you a visual of how this all comes together. What you're seeing is our foundational IP as well as the glue IP coming together in an incredibly dense sled, the compute tray that we build on to create that Helios capability. You're seeing incredibly tightly coupled GPUs.
You're seeing those connected leveraging our Infinity Fabric at 112 gigabits per second. You're seeing our ability to use standards: it's UALink that is connecting those GPUs to our AI NIC, and it's UALink, packetized and run over Ethernet, that's giving us scale-up capability in the Helios system. And there's one thing that you don't see among the physical elements here, but I described it earlier: those security blocks I mentioned are, of course, embedded in all of those elements. And why is that important? It's that we're now delivering security and confidential computing at the AI cluster level. It becomes so important right now because of what AI does.
As you are running your company and you're building models, you're training on your data, all the data that you spent decades building up, and creating weights, those become the crown jewels of your company, and they have to be safeguarded. What this capability provides is encryption, end-to-end encryption: it's encrypting data at rest and data in motion. The keys are owned by the user; you get them from a key server. It ensures that with any visibility into, or lateral traversal of, the data, all you're seeing is encrypted information. This is enabled with the 5th Gen EPYC that's already shipping today, with a standard called TDISP, or trusted I/O.
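(TDISP itself is a hardware-level protocol between devices, but as a software-level illustration of the model just described, data encrypted at rest and in motion under a user-owned key fetched from a key server, here is a minimal authenticated-encryption sketch using OpenSSL's EVP API. The function name and the key-sourcing comment are illustrative assumptions, not AMD's implementation.)

```c
#include <openssl/evp.h>
#include <openssl/rand.h>

/* Illustrative AES-256-GCM seal: encrypts pt into ct and emits a 12-byte
   nonce and a 16-byte authentication tag. The 32-byte key is assumed to
   come from a user-controlled key server, as in the model described above.
   Returns 1 on success, 0 on failure. */
int seal_aes256gcm(const unsigned char key[32],
                   const unsigned char *pt, int pt_len,
                   unsigned char iv[12], unsigned char *ct,
                   unsigned char tag[16]) {
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    int len, ok = 0;
    if (!ctx) return 0;
    if (RAND_bytes(iv, 12) == 1 &&                       /* fresh nonce    */
        EVP_EncryptInit_ex(ctx, EVP_aes_256_gcm(), NULL, key, iv) == 1 &&
        EVP_EncryptUpdate(ctx, ct, &len, pt, pt_len) == 1 &&
        EVP_EncryptFinal_ex(ctx, ct + len, &len) == 1 && /* finalize GCM   */
        EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_GET_TAG, 16, tag) == 1)
        ok = 1;                                          /* tag binds ct   */
    EVP_CIPHER_CTX_free(ctx);
    return ok;
}
```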
And you'll see, in the AMD portfolio next year and with other industry partners, that we extend this to heterogeneous clusters: CPUs, GPUs, storage devices, networking devices, all protected with this important capability. Now I want to shift gears a little bit, after going through that set of IPs and the example I showed you for data center and cluster applications, and talk about our gaming GPU roadmap. We've been on a continuous journey of driving graphics capability, and now AI capability, with our Radeon and Ryzen roadmaps, powered by the RDNA IP. RDNA 2 brought a ray tracing optimization to bear, and in the next generation, RDNA 3, Tensor engines were added, the introduction of AI to even further improve upscaling capabilities, giving you more lifelike images.
With RDNA 4 shipping today, we have even further usage of AI, and it's doing a couple of things. One is incredible upscaling: you don't have to look beyond FSR 4. If you're a gamer, and some of you are gamers, I'm sure, when you run FSR 4 you see an incredible gaming experience, upscaling to beautiful visualization. And it's not just about an improved gaming experience going forward in this roadmap, and you'll hear more about it from Jack, but as we further improve all of that machine learning for our ray tracing capability, it also enables these devices for edge AI applications. And then, of course, there are our AI engines, our neural processing units. We announced the first dedicated NPU accelerator at CES 2023, and it began shipping later that quarter.
Our second generation of NPU was focused on performance-per-watt improvement, with more TOPS, of course, at each generation, but it really unlocked the capabilities in Windows Copilot Plus. These are all powering our AI PCs shipping today. And our third-generation NPU, well into design, is going to be another significant improvement in both TOPS processing capability and, even more, energy efficiency. That's not only important in the AI PC you have in front of you, for the battery life you gain, but vitally important for edge AI applications, which you'll see growing with this technology. And as we wrap up the IPs that bring our solutions together, we've saved the best for last, because when you think about AI, it's software, and it's our ROCm stack that is the foundation of the experience and the ease of deploying on AMD.
Our ROCm stack is now fully battle-tested. It's been battle-tested in high-performance computing in those top supercomputers, and it's now been battle-tested by the most demanding hyperscaler customers. But we've also expanded. If you look at the last year, our adoption in enterprise is going up, and with it the ROCm support for enterprise, as well as support for the open community, bringing the additional power of developers across the world behind the ROCm ecosystem. So when you look at this capability, what you see is that we now have ROCm as the common foundational IP tying together the AI experience, from the data center and those most demanding applications, to gaming applications and edge AI that you can run on Ryzen and Radeon, and right into those embedded APUs that are a growing part of Salil's business. Tremendous progress on our ROCm AI stack.
Look, I want to wrap up with one key message: as we've implemented all these technologies and partnered so deeply with our customers in driving the market share gains that we've had, it's done something else. We've become a trusted partner with our customers, and we hear clearly from their standpoint where the market's going. The data center explosion, you know that's among our largest opportunities right now, and we're addressing it with the investments we've made from GPU to rack scale. Agentic AI, and the processes it kicks off, it turns out, are driving much higher CPU workloads, whether it be retrieval-augmented generation, the test-time phase of a reasoning AI application you're running, or simply agents kicking off many traditional scalar workloads on the CPU. We're seeing this as a major, major growth factor.
And likewise, the edge AI explosion is starting now, and we are poised with our integrated CPUs, GPUs, and NPUs to take advantage. So those three trends are upon us now and moving very rapidly. But we're also super excited, and I'm personally very excited, about physical AI. We start already with robotics today, but it's going to be a vastly growing market. And then there's quantum computing. Our FPGAs are used today predominantly to implement leading-edge error correction, and our FPGAs are also controlling the multi-qubit systems of many, if not most, of the quantum developers. But what actually gets me most excited is what we're doing to bring CPU and GPU classical computing together and optimize it with the quantum computer as an accelerator. We're super pleased with the work we've started with IBM and the partnership we've announced, and we'll expand that with other quantum providers over time.
Look, I just want to wrap by telling you that AMD is absolutely fueling, with innovation, the insatiable demand for compute in the AI era. Our roadmaps are designed to consistently provide product leadership and to leverage the broadest of IP portfolios. Our open software and hardware ecosystem gives both diversity of solutions and choice to our customers, and lastly, there's our relentless execution. We are collaborating with our customers and ecosystem partners alike to provide differentiated solutions and leadership products. Thank you very much.
And with that, I'm thrilled to bring up my close partner, Forrest Norrod, our data center leader.
Thank you very much. It's a pleasure to be here. Thank you all for coming. I started at AMD 11 years ago, shortly after Lisa became CEO.
The mission that she set for us was to re-enter the data center market, not to participate, but to be a leader. At that time, we were facing a dominant competitor that had nearly 100% of the market. So we knew that in order to make headway and fulfill this mission, we would have to build a strategy that incorporated a leadership product roadmap, a long-term strategy to rebuild customer trust, and a product and solution set that was truly differentiated, to induce people to give AMD consideration and then to make that switch. Now, we implemented that roadmap based on our understanding of where the data center industry was at the time and what would happen to it over the next decade. We knew that virtualization and cloud would continue to grow. That was clear.
We also saw that the explosion of data was going to continue unabated, that the conversation would shift from terabytes to petabytes to zettabytes of data flowing around the world's IT systems. And we also saw at the time the beginning of heterogeneous compute, the beginning of the era of the CPU and the GPU collaborating together on HPC and early AI applications. And so we built a roadmap that differentiated in areas that were relevant to those key trends: higher core densities, leveraging our chiplet technology, higher memory and IO bandwidth, more flexibility in both, all things that were directly relevant to where the industry was going.
And so we built a multi-generational CPU roadmap, and Dan will talk a little bit more about this as well, that steadily increased its differentiation and took more and more leadership with each new CPU, such that over time, we have grown our market share in server CPU to over 40%. We're the de facto standard in the cloud, and we're viewing this 40%, as Dan will tell you and Lisa already hinted, as just a milestone. We are far from done growing in CPU. But today we face a new dominant competitor, a different dominant competitor, and a changed future of the data center. Clearly, AI has changed the data center substantially.
The AI factory, the transformation of the data center into the AI factory, is obviously the most urgent trend: producing systems at giga scale that can efficiently train and efficiently infer to deliver business value from AI. That's happening, of course, at unprecedented scale. We all know it; it's embedded in all of the discussions we've just had, even on the TAM. That unprecedented scale means performance and power, areas in which we've historically excelled, are critically important. But even more so, you need reliability and resiliency to ensure that when you're building systems at that scale, they can actually sustainably deliver the value that customers need. One thing we don't see, however, is a monolith in the data center. We don't see the data center dominated by one particular element.
Instead, we see GPU racks as absolutely central to the data center, of course, to power the LLMs and power the AI engines at the core of these AI factories. But they're useless without the sea of data that fuels AI, without the storage systems to give them the data to train, to give them the data to infer, to give them the data for agents to actually operate. And then, of course, those agents, the tools now that LLMs use, all of the databases and other systems of records that need to be accessed for LLMs to actually deliver business value run on general purpose compute racks. And so we see a rich and vibrant data center in order to deliver these AI factories. And so we've built our data center portfolio over the last few years to address these needs.
Vamsi and Dan will talk about our GPU and CPU roadmaps in just a moment and the software that makes them sing. And then I'll come back in a few minutes afterwards and unpack our networking strategy and tell you more about how we're putting all of these elements together into compelling system and cluster-level solutions. So with that, let me first introduce Dan McNamara, the SVP of our server CPU business. Thank you.
Good afternoon. So three years ago at this forum, back in Santa Clara, I laid out a vision for EPYC, and I relayed that we really believed we were a strong contender in server CPUs and that we were about to enter a new era of growth.
I'm super excited to be standing here today and tell you that we have moved from that strong contender to truly the de facto server CPU leader in the market. Here's why. We delivered on the strategy we rolled out. Since then, we've delivered two new leadership generations of product with EPYC, with the best architecture, the best advanced packaging, and advanced processes. We've doubled down on our customer and partner network, building a vibrant ecosystem. As Mark just said, we've continued with our maniacal and relentless focus on execution in creating a predictable cadence of products delivered to market. We've built a very good amount of trust in the marketplace. Most importantly, all of this has enabled us to accelerate our path to number one server market share. Now let's look at the driving forces going forward.
Lisa and Forrest both talked about the fact that we've been on a great journey, and we're hovering around 40% share. But what's more compelling is that we've never been in a better technology and market position and at a time when the market's inflecting with AI demand. And if you think about it, it's really because we focus on the key needs of each of the segments that we operate in. With cloud, it's very well known. It's high-density performance and efficiency and driving generational TCO gains. And we've done that repeatedly through five generations, and that's driven a fair amount of share for us at this point. In enterprise, it's a little bit different. You have more performance per core, memory per core, trying to drive consolidation and modernization for the customer. And that's critical.
In addition, though, there are very beefy enterprise workloads that you must have that performance for to run the enterprise. Then lastly, for HPC, it's all about flops per node, and we continue to drive floating-point performance for key workloads like oil and gas discovery and genomics, and we continue to lead there as well. And as we talked about, AI is a horizontal, and we will continue to optimize for performance and TCO across the various use cases that I'll talk to you about. Now, let's spend a minute or two on the roadmap. Mark hit this, right? The foundational element for our products, every time we start planning, is to show Lisa a couple of things: what's the density, the performance gain, and the efficiency, gen on gen? That's table stakes.
But as Forrest just said, with every generation we make deliberate decisions from customer feedback and market demand. You heard Mark say we started with Naples in 2017, but with Rome, we drove a chiplet strategy that doubled the thread density and generated the highest-performing CPU in the market. It broke through very strongly in cloud, enabled us even in the enterprise on some high-end workloads, and obviously kept us very strong in supercomputing. Then with Milan, we doubled down on performance per core and drove an IPC gain, and we basically drove an inflection in the enterprise for us, across both vertical and industry workloads and general IT modernization. Then in the fourth generation, we came back, and the market gave us feedback that we needed two optimized core designs. So we drove a very high-performance core with Genoa, and we also drove a very high-density, efficient core with Bergamo.
Both of those are shipping in very, very high volume even today. With Turin, we did more. We brought more density and more performance efficiency. We kept the dual core designs, but we added some AI optimizations: we extended the AVX-512 block, and we added a market-leading five-gigahertz FMAX part for two things. One is EDA workloads, but the other is the CPU host for GPU nodes and clusters, for either inference or training, and I'll talk about that. And then we are very, very excited to deliver Venice next year, with more density, more efficiency, more AI optimizations, driving the best system-level performance. So now let's talk about the segments we serve. One sentence for cloud: the cloud runs on EPYC. We've been at it for a while with cloud, and it started with Rome.
We've expanded our offerings and optimization points for the various workloads. It starts with traditional lift and shift, which was really the beginning of cloud: workloads like ERP, email, and online transaction processing. That's all about TCO, and we deliver 50% better performance per dollar than our competition. Then there are mission-critical workloads for both enterprise and cloud-native companies. Think of content delivery, collaboration tools, database and analytics. That is all about VM performance, and there, again, we deliver over 70% better VM performance than our competition. And then what's been emerging for the last three to four years is HPC in the cloud.
These are traditional HPC workloads that are being accelerated by two different products: the high-frequency SKU that I talked about, and our X3D, with over a gigabyte of L3 cache, which accelerates simulation performance by over 2.7x and in some cases 3x. So you see, we've been accelerating with the cloud right from the beginning. More than this, all of it is enabled through our security features for hybrid-cloud and multi-cloud adoption, which is very, very important going forward. So with all of this value Lisa talked about, we've earned the trust of industry leaders. Ten of the top 10 hyperscalers have platforms deployed with EPYC, and the largest cloud-native companies are deployed with EPYC.
One of the more interesting things that I'd really like to talk about is that Fortune 500 mainstream enterprise customers, in their hybrid and multi-cloud environments, are now adopting EPYC on third-party IaaS faster than anyone. We've seen 3x adoption growth this year. And all of that drives back on-prem and drives enterprise adoption on-prem, so that the hybrid multi-cloud is end-to-end on EPYC. Speaking of enterprise, we've been very, very focused on enterprise for a couple of years, investing very heavily, and one of the key focus areas has been our ecosystem build-out. So I'm very excited to say that our platform count has grown 3x over the last three years. We've got almost 180 platforms, from racks to blades to towers to edge devices, and we've got 3,000 solutions in the market on top of those platforms.
That's another 3x growth from the last time we had this forum. And our OEM and ODM channel is very, very excited and has shifted most of its work to be time-to-market with us at every launch. Secondly, one of the areas where we break into the enterprise is what we call industry or vertical workloads. What I always say is, these are the workloads that drive the end business. In semiconductors, that's EDA. In telco, it's the network. The goal there is to accelerate those workloads, driving more throughput, faster time to market, or faster time to results. And we almost double our competition in terms of time to results. And then lastly, core IT modernization. That is all about driving TCO savings.
With our latest generation, we have an eight-to-one consolidation factor versus our competition, which, along with our density and performance, drives up to 80% TCO savings for the enterprise. All of this is accelerating adoption. We look at three things when we track our adoption. First is top customers: are they using us? As you can see here, over 60% of the Fortune 100 are using us. That's growing quarterly, and we track it very, very closely. Second: are we getting new customer acquisitions? These are customers with EPYC in their fleet for the first time, and we've doubled that year on year. Probably the most important is the customers who have used us in the past, with Rome or Genoa or even Turin now: are they using us more? Are they expanding their usage of EPYC in the fleet?
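To make the consolidation math concrete, here is a minimal back-of-envelope sketch, with entirely hypothetical cost inputs rather than AMD's actual TCO model, of how a high consolidation factor can translate into savings of roughly that magnitude:

```python
# Back-of-envelope server-consolidation TCO sketch. All cost inputs are
# hypothetical placeholders, not AMD's actual TCO model.

def annual_opex(num_servers, kw_per_server, kwh_cost=0.12,
                space_per_server=1200.0, support_per_server=2000.0):
    """Very simplified annual operating cost for a fleet of servers (USD)."""
    hours = 24 * 365
    energy = num_servers * kw_per_server * hours * kwh_cost
    return energy + num_servers * (space_per_server + support_per_server)

legacy = annual_opex(num_servers=8, kw_per_server=0.5)  # eight older boxes
modern = annual_opex(num_servers=1, kw_per_server=1.0)  # one dense server

print(f"legacy fleet : ${legacy:,.0f}/yr")
print(f"consolidated : ${modern:,.0f}/yr")
print(f"savings      : {1 - modern / legacy:.0%}")   # ~86% with these inputs
```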
And again, we've seen double the consumption year on year. So the enterprise conversation has changed quite dramatically, and we're very, very excited to see that change, because we've been investing very heavily here for the last few years. Now, HPC, that's sort of our legacy from Naples, right? We've been winning over the years with EPYC and even Instinct across national labs and research institutes, across some of the most important workloads like genomics and oil and gas. And now it's shifting toward AI for science. We are in a third of the top 500 supercomputers, and we're in 12 of the top 20 green supercomputers. And we continue to evolve our solutions, and you're going to hear about this.
As HPC evolves to AI for science, and Vamsi will talk about this with our MI430 and MI450 and Venice, we will continue to drive more and more performance there, but also provide our customers the much-needed silicon diversity and open ecosystems to really participate in and enable this market going forward. Now, with AI, I did want to spend a couple of moments here, because I believe there's a thesis out there, and Lisa mentioned this, that CPUs are being cannibalized. I want to spend a minute and tell you what we're seeing, because it's the exact opposite, and there are a number of different reasons and applications why. The first, and probably the most well-known, is the head node, right? The CPU host to an eight-way GPU server.
What we did there is we saw very early on, and we built a Turin product to drive a very, very high-frequency solution. The reason is that the head-node CPU in the cluster is really more of a synchronizer and orchestrator. It doesn't have the biggest job, but if you can drive high IPC and high frequency and keep those GPUs fed with kernel launches and all the orchestration, you can drive overall performance up on the cluster. We've shown that over and over, across both training and inference. Secondly, probably the biggest conversation I have through the course of a week is just creating an AI-ready data center. That's driving consolidation, that's driving energy efficiency, that's creating more space. With the CIOs who are our customers, we're trying to drive how they develop an AI infrastructure and modernize for the future.
This is front and center: how do you do all this but also enable those most critical enterprise workloads to boot? Then the end-to-end AI pipeline is also expanding quite rapidly. You have pre- and post-processing happening, and it's driving all sorts of new workloads, like RAG and ETL, new storage workloads, and then database and more analytics. All of that today in the enterprise is on x86. And then there's CPU inference. With the explosion of SLMs being fine-tuned for domain-specific applications, the CPU is now playing an even bigger role in inference, because it's a cost-effective way to infer on a smaller model. It also works in a mixed-workload environment, because with our high core counts, CPUs can do multiple jobs. Leveraging the extra cores there for real-time inference has gone extremely well, and we have many, many examples of that.
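As a minimal sketch of the CPU-inference point, this is what serving a small fine-tuned model entirely on CPU looks like with Hugging Face transformers; the model name is illustrative, and any domain-specific SLM would follow the same pattern:

```python
# Minimal sketch: running a small language model (SLM) entirely on CPU with
# Hugging Face transformers. The model name is illustrative; any small
# domain-tuned model follows the same pattern.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # illustrative small model
    device=-1,                           # -1 = run inference on the CPU
)

out = generator(
    "Summarize this ticket: customer cannot reset their password.",
    max_new_tokens=64,
)
print(out[0]["generated_text"])
```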
And then lastly, agentic and autonomous agents. Mark mentioned this, but as agents grow, more and more CPU cycles are needed to drive the interfaces between the agents and all of the real-time enterprise applications and the enterprise-wide structured and unstructured data. It's critical to drive this interconnect, and you've all probably heard of the concept of an MCP server. So all five of these applications and usage models for CPUs are driving what we believe is a very strong inflection point in CPU TAM. As you look here, we're going to add $30 billion in CPU TAM by 2030, basically doubling the CPU TAM over this horizon. And as you can see right here, in terms of 2025 to 2026, the inflection is right now, with the agents growing, with SLMs, and with all of the things we've been talking about on inference.
So this is very exciting for us, and I'm coming back to what I said earlier: we have never been in a better position from a product standpoint, with Genoa, Turin, and now headed to Venice. Speaking of Venice, we're going to launch it next year. It's on TSMC's two-nanometer process, and it's looking great. Our ODM, OEM, and cloud partners are all bringing up systems, and they're looking very, very good. We believe we will have the biggest ecosystem at launch next year when we bring it out. And not only does this maintain our leadership, it actually widens the gap from our competition. We cannot wait to launch this product next year, because it will absolutely be part of that curve I just showed you. Now, let me close with where I started.
The execution of our strategy has gotten us to this point, and we are in a great position to continue accelerating that leadership. We will continue to drive our cloud expansion. We're hitting the tipping point in enterprise, where we expect to grow, and we are extremely well-positioned for this AI growth. There's not a question in our mind that we have a clear path to greater than 50% market share. So thank you, and I'd like to introduce Vamsi to the stage.
Good afternoon, everybody. I am super excited to be here to talk to you about all the progress we're making on our AI strategy.
Now, one of the real privileges with my job is that I get to talk to AI innovators across the world, people that are doing incredible things, inventing new drugs, solving new math that has not been solved before, tackling really, really hard problems. And as we've made progress with our strategy, what's been really amazing for me to watch is that more and more of these breakthroughs are actually happening now on AMD systems. Whether it's researchers at OpenAI or developers at Meta or enterprises like Tesla or numerous startups that are doing exciting things on our platforms, everybody's now proving that they can do their most advanced AI development on AMD platforms. So what I want to do is tell you a little bit about our strategy that's making all of this possible. Our strategy is based on three core pillars. First, delivering leadership roadmaps.
Second, providing world-class open-source software. And third, establishing deep partnerships with the leaders that are defining the future of AI. And as we made progress executing our roadmap, as the open-source community embraced AMD, and as we established the right routes to market, more and more adoption came. So I want to give you a little bit of a sense for our progress with a few examples. Just 18 months ago, if a user wanted to develop on top of an Instinct platform, well, there was no place in the cloud they could go. Fast forward to today: we have like a dozen clouds now, and 35 platforms that carry AMD GPUs. It's a similar story on the software side. A couple of years ago, if you went to the leading AI frameworks and said, "Hey, I want to develop on AMD," support was not guaranteed. We fixed that.
We worked with all the top AI frameworks systematically and said, "Let's support AMD GPUs." A good example is the partnership we set up with Hugging Face two years ago. At that time, Hugging Face model support for AMD GPUs was not guaranteed, but we worked with them to fix that, and today the entire Hugging Face catalog of models runs on AMD GPUs successfully. So as we made progress, more and more workloads came and customers started adopting, whether it's Microsoft with their GPT models, or Meta with their recommendation models, or X with their Grok models. Customers are trusting their most important workloads today on AMD platforms. Now, following the success of our MI300 products, this June we launched our MI350 series of products.
These GPUs were launched to address the huge demand in compute, and they brought with them innovations like 4-bit compute and a next-generation memory system that allowed AMD to stay at the leadership level on memory capabilities. Most importantly, we made it super easy for people to migrate onto this platform, both from a hardware perspective, system-configuration-wise, and from a software perspective. That's why the ramp for these GPUs has been smooth, and they're delivering excellent performance. If you look at inference serving on leading models like GPT-OSS or Hunyuan Video, these GPUs deliver excellent performance. To see how easy it has been to adopt this platform for critical workloads: when the latest GPT-OSS model first came out in August, it ran on day zero on MI355, just a few short weeks after we went to production.
We introduced support for 4-bit processing and delivered excellent performance. As you see on the charts here, this is also a very good training machine, particularly for workloads like fine-tuning. Now, all of this performance translates into true economic value. Look at metrics like the cost to serve a million tokens; this is an example from a recent InferenceMAX benchmark. Our latest GPUs, the MI355, deliver up to a 10x benefit on the GPT-OSS model in cost to serve tokens relative to our previous generation of GPUs. This is exactly the type of economic value and gains that our customers expect generation over generation. As our platforms have made progress and expanded their capabilities, so have the workloads that run on them.
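For context, the cost-to-serve metric reduces to a simple relationship between sustained throughput and the hourly cost of the hardware. Here is a simplified sketch with placeholder numbers, not measured MI355 figures and not InferenceMAX's full methodology:

```python
# Simplified cost-per-million-tokens calculation. Throughput and hourly
# cost below are placeholders, not measured MI355 numbers.

def cost_per_million_tokens(tokens_per_second, gpu_hour_cost_usd):
    tokens_per_hour = tokens_per_second * 3600
    return gpu_hour_cost_usd / tokens_per_hour * 1_000_000

# Higher sustained throughput per GPU-hour directly lowers $/Mtok, which
# is why generational throughput gains show up as cost-to-serve gains.
print(cost_per_million_tokens(10_000, 2.00))  # -> ~$0.056 per million tokens
```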
As you might remember, our initial focus had been on inference that leveraged the strengths of the MI300's memory system, with GPT being one of the early models. But since that time, inference deployments have expanded in terms of workloads. We have added recommendation-system support, multiple modalities are supported, and coding agents are supported, so a number of inference models and workloads have now come onto the platform. We've also steadily expanded our training engagements with many customers: Microsoft is doing MoE training on it, Meta is doing recommendation training on it, our own teams are building large language models, and many exciting startups, like Essential, Cohere, and Zyphra, are all training their next-generation model architectures on AMD platforms.
Just to give you a sense for how the customer journey has gone in terms of workload expansion, I want to tell you about one hyperscaler engagement. They started, obviously, with a first workload, very successfully, and within just 12 months, they now have over 70 workloads, spanning all of what I talked about, running on Instinct platforms. We're super thrilled that they're able to do these sorts of things. Now, as much as all of this adoption has been because of our compelling roadmaps, it's also been because of the tremendous gains we've made building our software capabilities, so I want to talk a little bit about that. If you look at ROCm, with each release we have added significant features and performance, and that's what has been crucial to supporting all these workloads.
We also recognized that we needed to be relentless in our focus on developers, improving out-of-the-box capabilities and providing richer collateral, content, and documentation that makes them productive right away. Our customers are delivering AI capabilities at unprecedented pace, and we responded to that: we've accelerated our release cadence, and new optimizations and capabilities now ship every two weeks in prepackaged Docker containers. We have ensured that the leading models are supported on day zero. When DeepSeek came, or GPT-OSS, or the latest Qwen or Gemma models, they all ran on day zero over the last several months. So adoption of the platform, because of all of these improvements, has been growing significantly, and that momentum is showing in the numbers. The two million models that exist on Hugging Face today all run on AMD GPU platforms successfully.
The top 10 AI frameworks and projects for GPUs all have very good AMD GPU support. And if you look at metrics like downloads, they are on a fast exponential: compared to last year, we have 10x the downloads. There is a growing and vibrant ROCm ecosystem, and we are doubling down on developers and the developer community even further to make it go even faster, through collaborations we have set up with research professors at Stanford and Berkeley, and through rich educational content we are delivering in partnership with organizations like DeepLearning.AI. We've also stepped up our engagement with developers through a number of forums like hackathons, contests, and meetups. If you look at the last year, we have significantly stepped that up.
An outstanding request from our developers has been, "Hey, can I go to a place where I can hack on AMD GPUs?" So in June this year we provided a developer cloud. It's been very successful, and we are committed to expanding it even further. Now, all of this progress is obviously not happening by accident. It's because we are driving a clear and deliberate strategy, and this strategy is actually pretty simple. It's based on two core principles: build with open source, and build with the right levels of abstraction. Let me talk about both of them. When I talk about building with open source: open source is where we have scale, and it also moves extremely fast, and that's where AI developers live today. That's why we have chosen to do a lot of what we do in ROCm in the open.
Pretty much all of our libraries and framework code is out in the open, allowing customers and developers to contribute to ROCm. That's been a big plus for us. We also leverage the trend toward rising abstractions. Let me give you an example of what I mean by that. I'm a computer science guy; I have a PhD in computer engineering. If you gave me one of our chips with something like 100 billion transistors and said, "Hey, write some assembly code," it would be very, very painful. Engineers don't like doing that. That's what has led to programming abstractions like PyTorch, right? People like to write higher-level code that makes them more productive. So that's what we have done: we have enabled very strong support for all of these frameworks that sit at the right levels of abstraction. That is the trend that is going to continue.
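To illustrate the abstraction point with a minimal sketch: at the PyTorch level, code is device-agnostic. On a ROCm build of PyTorch, the familiar torch.cuda calls are backed by AMD GPUs, so code like the following runs unchanged across vendors:

```python
# Device-agnostic PyTorch: the level of abstraction described above. On a
# ROCm build of PyTorch, torch.cuda.is_available() reports AMD GPUs and the
# same code runs unchanged -- no assembly, no vendor-specific branches.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.GELU(),
    torch.nn.Linear(4096, 1024),
).to(device)

x = torch.randn(8, 1024, device=device)
with torch.no_grad():
    y = model(x)
print(y.shape, "computed on", device)
```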
The happy byproduct of that trend is that once such code exists, it's nicely portable across platforms, and we are benefiting from that. But the thing I'm really excited about in our software strategy is actually what's coming next. Over the past several months, my engineers have come to me many times and said, "Hey, we've used AI to write this GPU kernel, and it's shockingly better than anything we were able to create before." What is really happening is that the enormous gains AI has brought to general-purpose programming are now coming to GPUs. While it's not fully there today, I am deeply, deeply convinced that a significant amount of GPU programming is going to be transformed by AI. A lot of GPU programming will be done by AI, and it will take down with it any last remaining barriers to the adoption of our platforms.
So now, as we've made a lot of progress with our platforms and our software, we have learned a lot from our customers about how their workloads are evolving and about their roadmaps. We have literally earned a seat at the table as they design and plan their next-generation systems. And we have incorporated those learnings, those precious insights, into creating our best GPU family: the MI400 series. This is a defining moment in our AI roadmap journey. Built on the latest process technology, it leverages all of the innovations Mark talked about, with chiplet architecture and 3.5D packaging. The MI455X product in this family packs up to 40 petaflops of peak FP4 compute, and it has 432 GB of HBM memory running at 19.6 TB per second.
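A quick, rough calculation of what those two headline numbers imply about compute-to-bandwidth balance, offered as a roofline-style sketch rather than an official AMD characterization:

```python
# Rough roofline-style ratio from the quoted MI455X headline numbers.
# This is an editorial back-of-envelope, not an AMD characterization.
peak_fp4 = 40e15        # 40 petaflops of peak FP4 compute
hbm_bw = 19.6e12        # 19.6 TB/s of HBM bandwidth

print(f"{peak_fp4 / hbm_bw:,.0f} FLOPs per byte of HBM traffic")  # ~2,041

# Workloads with lower arithmetic intensity than this (e.g., per-token LLM
# decode) are memory-bound, which is why HBM capacity and bandwidth
# leadership matter so much for serving large models.
```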
It is the best and most advanced AI accelerator we have ever built, designed to serve the trillion-parameter models that are the reason the next generation of AI infrastructure is getting built. We've also synthesized everything we have learned about our hardware, our software, our networking, and how we bring it all together into rack-scale infrastructure with our Helios offering, which Forrest is going to unpack after me. This rack-scale architecture is designed to service the most demanding training and inference workloads. With its leadership memory system, strong compute performance, and rack-scale architecture, this product is designed to offer leadership performance and a truly differentiated TCO value proposition for our customers. Now, the MI400 series is more than the MI455 product.
Even as AI has continued to transform markets, there has been a huge explosion in demand for scientific computing, driven by nations seeking self-sufficiency in their compute, and by research institutions and enterprises that want to make scientific breakthroughs. To address this exact need, we have built the MI430 product. The MI430 is built from the exact same hardware and software foundation as the entire MI400 series. It leverages our chiplet architecture, but it customizes the compute for what these markets need, providing the high-precision, double-precision floating-point compute required to get the highest performance in these markets. This product continues our leadership tradition: we have the fastest supercomputers today with Frontier and El Capitan, and the MI430 series is designed to carry that into future supercomputers.
Stay tuned: we're going to share a lot more about this product at the upcoming Supercomputing show next week. So now, the MI400 series, our customers love it, right? Whether it's frontier builders like OpenAI and Meta, or infrastructure partners like Oracle, or research institutions like Oak Ridge, there's been significant interest and strong adoption for these platforms. Lisa talked a little bit about our engagements; I want to give you my own perspective on how this has gone. Let's start with OpenAI. A couple of years ago, when we first engaged with OpenAI, it was not clear how their workloads were actually going to get mapped to AMD GPUs.
So we sat down with them and said, "Hey, it's super important that we actually support Triton, the framework they've chosen. We need to add an AMD backend for it." So we enabled it functionally, and over the last year we made it performant, adding a lot more features and capabilities. As they gained more confidence, they expanded their engagement with us across hardware, software, and networking, and that led up to the MI400 series. We're super excited to see what they will create with this product. We're also deeply grateful for our engagement with Meta, starting with our MI300 series, where they served Llama and showed how well the platform performs, and expanding from there to recommendation models.
We've collaborated with them very actively on PyTorch, and there's deep collaboration on Helios, which Forrest is going to talk about. When AI leaders who are defining the future of compute choose AMD, it absolutely validates our roadmap, our ability to execute, and our ability to be deployed at scale. Now, as amazing as the MI400 is, what's even more extraordinary is that the demand for compute keeps going up. Models become larger, stronger, and more diverse; they become agentic, they think, they interact. As industries adopt them more, there is continued need for a lot more compute. To meet this extraordinary demand, we are introducing our next big leap: the MI500 series.
With significant innovations in compute, memory, networking, rack-scale integration, and even deeper hardware and software co-design, this is not an incremental step on our roadmap. This is going to be our next big breakthrough in AI performance. So now, to bring it all to a close: we are at a truly unique moment in our AI journey. Our leadership roadmaps are getting stronger, delivering greater gains and expanding usage with each generation. Our open platforms, with our rack-scale systems and open-source software, are driving greater adoption. The deep partnerships we have set up with the AI leaders defining the future of computing are showing what is possible when you collaborate at scale. And the MI400 series is poised to place AMD in the middle of some of the largest AI infrastructure buildouts that will define this decade.
Together with our partners and customers, we are thrilled to be able to build the future of compute. Thank you.
With that, let me welcome Forrest back to the stage to tell you how it's all going to be put back together at the data center level.
Thank you, Vamsi. As Dan and Vamsi showed you, we are committed to driving leadership in our CPU and GPU components and, again, in the software that makes them sing. Now I want to talk to you about how we put it all together, with the networking and, of course, with system design at the rack and cluster level, to truly build complete AI factories. Let me begin with networking.
Networking, as much as the GPU and CPU, now drives the performance of AI factories. The amount of data that has to be moved around to feed AI and to act on the results from LLM engines is absolutely incredible, and you have to have networks that can handle it. The networks needed for an AI factory have rapidly evolved from the old days of the single network: we now see that you have to have a front-end network and up to three back-end networks to effectively scale out GPU systems and reach gigascale. Our approach to providing these networking solutions is a little bit different from our competitors'. First off, we believe in open. AMD always has. We believe in open standards, and we believe in open ecosystems.
And so at AMD, we've been thrilled to help spearhead some of the most significant new networking standards of the last few years, with Ultra Accelerator Link and, of course, with Ultra Ethernet. Ultra Ethernet is a great example of a standard that has rapidly made Ethernet the premier, unquestioned best scale-out fabric in the industry, and we've done this in conjunction with other leaders in the industry, offering our customers choice and enabling others in the ecosystem to add incremental value to the AMD open ecosystem of AI systems. Let's begin diving deeper with the front-end network. The front-end network is an underappreciated part of the puzzle, but in reality, without it, the user can't reach the LLM, and the LLM or the AI agent cannot reach the data, the databases, the resources it needs to actually turn tensor operations into intelligence.
A good front-end network can do more than just connect. It can actually add performance to the overall system, accelerating AI and cloud workloads by offloading key tasks: software-defined networking, storage access, storage abstraction, and, of course, security. Security, as Mark mentioned earlier, is incredibly important in the era of AI, and networking can play a huge role in ensuring that systems are secure end-to-end, such that users' data and the AI models are always protected. At AMD, we have a fantastic solution for front-end networks. The Pensando team at AMD has built the world's best DPU, and we're now in the fourth generation of this processor. It has a unique architecture using P4-programmable engines that allow us to do stateful and stateless services of any nature, for traditional SDN and security offloads, but also to provide services that offload specific data-movement actions for the GPUs.
It does all of that at line-rate performance. With the Pensando approach for both the front-end and, as I'll describe, the back-end networks, you can actually deliver network innovation that continuously evolves at the speed of AI, without giving up performance. Now, the back-end networks: I mentioned there are three that we're talking about today. You have the scale-up network, which takes pods of GPUs, dozens to a few hundred today, and welds them together into effectively a logical GPU that can coherently share memory and results in a very efficient fashion, looking like one large GPU resource. You also have the scale-out network, which is not as tightly connected but allows you to scale efficiently, particularly for training, out over hundreds of thousands of GPUs.
Now, the new concept, candidly, over the last year or so is scale across, where even scaling out to 100,000 GPUs is not enough. To reach gigascale, you have to bust out of the walls of a single data center hall and federate data centers, or data center halls, in an efficient, effective way. For scaling up, that welding together of dozens to hundreds of GPUs, what really matters is an ultra-low-latency connection and ultra-high bandwidth, and you have to adhere to the data-transfer protocols that GPUs want to speak. You don't want any friction in translating from the way GPUs compute and want to communicate into the protocol you're using to do so. And of course, scalable technology alone is not enough.
Above all else, you have to have reliability, because you can't have a link flap or a single link going down bring down your entire pod of GPUs, perhaps ruining hours of training time or disrupting a critical distributed inferencing task. At AMD, with the MI450 and with Helios, our solution to scale up is open. We are implementing the Ultra Accelerator Link protocol, which efficiently provides load-store semantics to the GPUs so they can communicate efficiently, and we are transporting that protocol at 260 terabytes a second across a Helios pod of 72 GPUs via packetized Ethernet, UALink over Ethernet. It's very similar in concept to the emerging Ultra Ethernet standard, where we're packetizing various protocols and using Ethernet as the transport.
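For a rough sense of scale, dividing the quoted figure across the pod (assuming the 260 TB/s is aggregate bandwidth; AMD's exact accounting may differ):

```python
# Per-GPU share of the quoted scale-up bandwidth, assuming the 260 TB/s
# figure is an aggregate across the pod (AMD's exact accounting may differ).
aggregate_tb_per_s = 260
gpus_per_pod = 72
print(f"~{aggregate_tb_per_s / gpus_per_pod:.1f} TB/s per GPU")  # ~3.6 TB/s
```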
In Helios, we then have the scale-up fabric linking all of the GPUs together over six redundant network planes to provide, again, not just bandwidth and performance, but the resiliency to ensure that a slight perturbation in the network doesn't bring down the pod. For scale out, we connect, again, hundreds of thousands of GPUs to work as one. This is an area where the industry has really innovated together, with Ultra Ethernet and similar efforts to efficiently provide advanced RDMA networks that improve performance and, very critically, reliability over either InfiniBand or RoCE v2. More than this, the AMD Pensando NICs that we use to connect to the scale-out networks, with their P4 programmability, offer additional opportunities to accelerate performance: to offload communication tasks from the GPUs, to offload collective operations from the GPUs, to improve compute-resource utilization, and therefore to get workload acceleration.
They also allow us to innovate new features, like multiplane support and advanced packet-transport handling, to further improve network TCO. We've seen multiplane support offering opportunities to reduce scale-out networking TCO by 50%. And then, of course, there's scale across. This is where things get a little bit different. If you're scaling across multiple data halls, or across tens of kilometers, the latency and reliability of those connections is very different from within the data hall, so the network has to accommodate dramatically varying latencies and network perturbations. You have to do adaptive path management, and you have to do dynamic load balancing. There's a variety of approaches that have recently been promoted to do this. At AMD, we think the intelligence to do this really belongs where the data is generated and consumed: in the node itself.
We have implemented scale across in the NIC itself, and we have already begun delivering it to customers with the existing Pollara 400 NICs for the MI300 series. We can effectively deliver scale-across functionality and performance without relying on proprietary or expensive switch extensions. The NICs we have to provide these scale-out and scale-across features are our Pensando Pollara 400 and Vulcano NICs, for the MI300 series and the MI400 series respectively. True to our commitment to open, we also support customer choice. We support, amongst others, Broadcom NICs: the Thor series of NICs works very well with AMD GPU solutions, the Thor 2 with the MI300 series, and the forthcoming Thor Ultra series, which fully implements Ultra Ethernet, also works extraordinarily well inside Helios. Our system is not closed.
So let's put it all together. Scale across means multiple data centers can be federated together. Scale out means you can combine hundreds of thousands of GPUs using advanced Ethernet. And then, diving into the rack: as Mark showed earlier, we're using UALink and Infinity Fabric to interconnect the CPUs, GPUs, and scale-out NICs within the compute blades. Then we have Ethernet-based scale-up switch trays that take the UALoE traffic and switch it with very low-latency, high-performance Ethernet-based switches. Those are interconnected via redundant cable cartridges, with short lengths, that provide high-reliability redundant connections across all of the switches and the compute trays. Finally, the front-end network and the scale-out network emerge from the Pollara or Thor NICs and interconnect to the rest of the data center systems.
You put it all together and you get Helios, which, as Vamsi said, we believe will be the most performant, most efficient, most serviceable AI factory building block when it comes to market in Q3 of next year, incorporating all of our technology and doing so in an open way. But just as we will continue an annual cadence for the components, we will continue evolving our rack-level systems on an annual cadence as well. Accompanying the MI500, we have a yet-to-be-named rack that we'll unpack more later. Again, it will offer more: more GPUs, more bandwidth, more capability, more performance, and the next generation of our components. So stay tuned on that. It's important to understand that for both of these, AMD is not selling the rack systems.
We are developing the complete solutions, ourselves and in conjunction with our partners, and then bringing them to market through leading OEMs, through our ZT partners, who are now owned by Sanmina Corporation, and of course through our other ODM partners across the ecosystem. They will all have time-to-market solutions when we bring the MI450 and Helios to market. So let me end where I began. We set out a decade ago to drive data center leadership. We've made tremendous progress on the CPU side, and we've made tremendous starting progress on the GPU and networking side. With the capabilities and the teams we have in place, and with our strategy and our partnerships, we are highly confident that we can build the best solutions for the AI factories of the future, for the foreseeable future.
And when we think about the $1 trillion TAM for AMD, by the way, that's a silicon TAM, just so we're clear; it's not to be compared to some others who might talk about an overall solution TAM. But that $1 trillion silicon TAM, we're very confident in our ability to address it. We've already grown the AMD data center business from essentially zero, when we began this journey 11 years ago, to over $16 billion expected for this year. That's great growth, but we believe that with our portfolio, that growth can accelerate. And as Lisa mentioned earlier, we believe we have a clear line of sight to a 60% CAGR over the next three to five years for the AMD data center business writ large, which generates, you can do the math, over $100 billion in annual AMD data center revenue.
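Doing that math explicitly, compounding this year's roughly $16 billion at the stated 60% CAGR (a simple sketch of the stated guidance, not additional disclosure):

```python
# Compounding ~$16B of 2025 data center revenue at the stated 60% CAGR.
base_b = 16.0
cagr = 0.60
for years in (3, 4, 5):
    print(f"{years} years: ~${base_b * (1 + cagr) ** years:,.0f}B")
# 3 years: ~$66B; 4 years: ~$105B; 5 years: ~$168B -> crosses $100B within
# the stated three-to-five-year window.
```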
It's an incredibly exciting time for our whole team, and we are so, so pleased to be part of this journey. Thank you very much. Let me now turn it over to Matt, who's going to take us to the break. Thank you, Matt.
All right. Thank you, everybody. Hopefully, that was an informative session. We are going to take a break. We were running just a hair behind. I think Jack is going to come on and talk about our client and gaming businesses. We are going to do that at 3:30 P.M. So it'll be a little bit of a five-minute delay from what we had put in the agenda, but we wanted to give you guys time to get through the restroom queues and all that fun stuff. So back on here at 3:30 P.M., and thank you very much.
Please return to your seats. The program is resuming now. Please welcome back to the stage Matt Ramsay.
Thank you very much, everybody. Maybe we'll just wait a moment; I think there are some people over here still finding their seats, so we'll give it a couple of seconds. I have no other entertainment to offer, so I apologize. Anyway, thank you, everybody, for following instructions and getting back to your seats so we can start close to on time. Hopefully you enjoyed the first few sessions with Lisa, Mark, and the data center guys. As we move forward here, I'm really excited to welcome to the stage the head of our computing and graphics business, Jack Huynh.
He's been running that business for about two years, and I think you guys might have noticed the difference in the results of that business in the two years he's been running it. So come on stage, my friend. Lots of good stuff to talk about. Thank you, Jack.
Good afternoon. It's great to be here in New York to show how AMD's computing graphics business is evolving and to share with you where we're headed next. As Matt and Lisa said, I stepped into this role two years ago, and we unified the client and gaming organization into one business. Today, I'm going to show the momentum and, more importantly, the path we're on towards market leadership, and I can't think of a better way to share my excitement for this business than to be here today.
We have transformed the business in the past two years. We achieved our market growth through a leadership product portfolio, and we've deepened our relationships across the entire ecosystem, from OEMs to software developers. We've also accelerated our time to market and driven R&D synergies across our client and gaming organizations. This strategy delivered a roughly 50% year-on-year increase in revenue, from $9.6 billion to over $14 billion in just this past year. And it all starts with building a great product portfolio. We have built the world's best desktop portfolio with our X3D technology, and we have the top-selling gaming CPUs. In mobile, we have the best AI PC portfolio top to bottom. In enterprise, we are accelerating our commercial share growth, and we have the world's best workstation platform with Threadripper.
And early this year, we added Dell as a new enterprise partner with a top-to-bottom AI PC portfolio. And we have a partner-first mindset. We design and bring the right products at the right time. And we align our roadmap with our customers' long-term, multi-year strategic needs. And we're building a world-class go-to-market operation to further scale the business. And there's no one better to tell our story than our end customers. And the response has been very strong. Large enterprises are converting to AMD for the very first time and seeing the benefits. Over 50% of Fortune 100 companies have already deployed Ryzen PCs. And we've also expanded our footprint into the public sector. I'm very pleased with the incredible results we have been able to deliver. We have a systematic and strategic approach towards business growth. We have doubled the business in just two years.
And we've been able to change the economics of client from value to premium, growing our ASPs 50% in the past two years as well. At the same time, we've almost doubled our market share, reaching a record 28% revenue share. I love client, but I'm also a lifelong gamer, and I'm incredibly proud that we've won consecutive generations of consoles with Microsoft Xbox and Sony PlayStation, which is unprecedented in the industry. We also created a new category, AAA gaming handhelds, enabling the entire catalog of PC games to be accessible on a handheld device and driving a new vector of growth for our gaming business. In consumer graphics, we focus our strategy on delivering the best mainstream GPUs in the marketplace, offering the best performance per dollar. And our graphics architecture is highly scalable.
It allows us to address a large range of segments in both client and gaming. And we're working very closely with our software developers to reduce the time of development of all future games, creating massive new worlds with more immersion, more detail, and more exploration. And we're ready for this transition. All this growth we just talked about is still in the early innings of the AI PC acceleration, but we're building towards something even bigger. We're entering a new era where AI is happening everywhere, running directly on the devices that you use every single day. Over the next several years, we expect to see over one billion end-user devices powered by AI locally, which is why we're infusing AI into everything we do across client devices, gaming, and broader compute solutions.
We have the scale to drive the AI PC inflection to the market with the right partners and the right platforms. We believe that AI will fundamentally augment the value that the PC can deliver. You have thousands of interactions with your PC every single day, and AI will be able to understand every personal interaction, bring automation, deep reasoning, and personal customization to everyone. What AI did to the cloud with large-scale design automation, AI will bring to the endpoint with low-latency task and productivity automation. In the education space, AI has the potential to be a personal tutor for every student, an assistant for every teacher. The learning process is highly personalized, and AI will be able to adjust for every individual. Let me show an example of how fast the AI ecosystem is moving.
Just 18 months ago, the Llama 3 model was introduced, amazing the world with its AI capabilities. At over 70 billion parameters, it was a model that could only run in the cloud. Fast forward to just a few months ago, and similar models can now run locally, more efficiently, just as performant and just as accurate as the bigger models. This fundamental trend is going to continue and will only accelerate the AI PC value proposition. We have a very aggressive strategy and a very ambitious roadmap. Our AI PC strategy starts first with building a great PC, a better PC, the perfect PC, and then we add the best AI architecture and capability to it. We have a no-compromise PC strategy, and we're investing ahead of the curve to capitalize on the AI PC value creation and to lead the industry through innovation.
The future of AI PCs will be built on AMD, and we're aggressively accelerating our client and gaming roadmaps with an AI-first mindset. But it doesn't stop at the endpoint. We see AI expanding beyond the endpoint, and the same problem statement we're solving for endpoints also exists at the edge. The explosion of data from endpoints creates demand for AI at the edge. Besides the need for confidential individual AI assistance, there is a need for confidential enterprise-level AI assistance. This is where edge AI can provide the capability to deal with the large amounts of data produced by endpoints and extract decision-making ability at the enterprise level. For every scale of model, there's an optimal hardware to process it, and we release products spanning models of tens of billions to hundreds of billions of parameters.
We created a new product category with Ryzen AI Max, a mobile AI workstation with 128 gigabytes of unified memory across the CPU and GPU, with full CPU and GPU coherency. It also doubles as a supercharged small-form-factor AI device, supporting models of up to 120 billion parameters. We also recently launched our first Radeon AI PRO enterprise cards, designed for scalable multi-GPU deployment, a very cost-effective solution for large AI models available today. And we leverage our ROCm ecosystem from endpoint to cloud to provide a seamless developer experience. We're deepening our R&D investment to go after this opportunity as we expand our AI portfolio. The opportunity in client and gaming has never been greater. We have a very consistent strategy, as Lisa outlined earlier today: building great products, leading the ecosystem, driving operational efficiency.
Our focus is to strengthen AMD's market leadership and drive sustained growth. We have a strong leadership roadmap, deep ecosystem partnerships, and a proven track record of execution, and we're confident in achieving revenue growth at more than three times the market rate while expanding our market share to over 40% in the next three to five years. We've built tremendous momentum, and we have a clear path to market leadership. Now we're entering a new era. AI is transforming the PC experience and redefining what compute means across every device in our portfolio. This is not just another product cycle; it's a once-in-a-generation shift that expands our opportunity across every segment. Our next chapter is about scaling the client business, deepening our console advantage, and unlocking new growth with AI at the edge.
If I can leave you with one thought today, it's we are ready to lead the gaming and AI PC era. Thank you, and with that, let me introduce my colleague, Salil Raje.
Thank you, guys. Thank you, Jack.
Let's talk AMD Embedded. What's happening inside AMD Embedded is one of the biggest transformations at AMD. I bet most of you haven't been thinking much about the AMD Xilinx acquisition since it closed in 2022, but it still stands as the largest, and dare I say the best-managed, acquisition in semiconductor history. I'm both honored and humbled by the responsibility of taking that historic business and turning it into long-term value for AMD and for the customers we serve. We've come a long way. Let me share with you what we've been up to and where we're headed next.
The scope of AMD Embedded has increased tremendously over the last three years. What Xilinx brought to AMD was leadership FPGAs, adaptive SoCs, a whole host of strategic IP, AI engines, and high-speed interfaces, but just as important, thousands of loyal embedded customers. AMD delivered high-performance compute IP, CPUs, GPUs, and APUs, plus scale, manufacturing depth, and advanced packaging technology. We are now leveraging the strengths of both companies to deliver integrated end-to-end solutions for our customers, unlocking new vectors of growth and winning designs that neither company could have won alone. Since the acquisition, we've expanded the scope of Embedded significantly, increasing our portfolio to go from cost-optimized products to high-performance Versal-class products.
But the Embedded business has expanded beyond FPGAs to now include embedded x86, where we leverage CPUs, APUs, and GPUs; to semi-custom silicon, where we are focused on strategic customers and their high-value engagements; and to physical AI, as cloud intelligence moves to the physical world, the real world. This is allowing us to tap into $30 billion of TAM. We're continuing to win in our core FPGA business, the adaptive compute business, while at the same time expanding into new spaces, leveraging AMD's IP portfolio and global reach. Prior to the acquisition, our portfolio was narrow. On the adaptive side, we focused mostly on high-end FPGAs, and our portfolio in the embedded x86 space was also quite narrow. That meant we had clear gaps in key markets, which limited our reach and constrained our growth. But since the acquisition, we've expanded our portfolio significantly.
On the adaptive side, we now go from cost-optimized Spartan-class products to high-end Versal-class products, and that has allowed us to go after markets from vision and robotics all the way to aerospace and defense, enabling us to grow at a rate faster than baseline FPGA growth. On the embedded CPU side, we've taken AMD's dormant CPU business and created a more compelling and competitive product portfolio, which is allowing us to go after markets such as industrial, networking, and storage. We have launched 14 new products, and we are now winning designs and building the foundation for semi-custom silicon and physical AI. This is the strongest, broadest, and most competitive portfolio we have ever had, and it's a strategy that is working. Across FPGAs and embedded x86, our design momentum is exceptional. We're continuing to double down on adaptive compute.
It is still our crown jewel, now broader, stronger, and more competitive than ever. On the embedded x86 side, we have a highly leveraged, purpose-built product portfolio that is allowing us to go after newer markets, newer applications, and new customers. Our embedded sales team and our channel partner organization have deep relationships with thousands of customers. We are now taking that broad portfolio of products, working with these customers, and winning market share across all our markets, and that is synergy that is paying off. Our design-win momentum is phenomenal. We have won thousands of designs totaling $36 billion plus, and that is execution. That's customer trust, and that's how we're winning in embedded compute. As Lisa mentioned and Mark also talked about, AMD Embedded is now in the semi-custom silicon business.
The semi-custom silicon business will be one of the largest growth vectors for AMD Embedded, and it's accelerating fast. We are not just going after every ASIC. We are very focused on a few strategic customers with high-value engagements, where these customers are co-developing with us because of our application knowledge, our IP, and our execution. We are leveraging the industry's broadest IP portfolio: our FPGAs, our x86 CPUs, ARM SoCs, GPUs, APUs, NPUs, DPUs, RF technology, and our advanced packaging technology. Our financial strength and our scale are allowing our customers to put their entire roadmaps onto our platform. That's really the killer combination that anchors customers to us for their mission-critical, multi-generational products. We have secured a significant number of design wins through the semi-custom silicon business, $15 billion across automotive, data center, aerospace and defense, and wireless.
In the years to come, the semi-custom silicon business will add meaningful revenue to AMD Embedded, scaling up to almost a third of our business over the long-term horizon. Semi-custom silicon is one of the clearest examples of how we have transformed from a focused FPGA business into a broad compute platform with entirely new growth engines. Now, putting it all together across FPGAs, embedded x86, and semi-custom silicon, our design-win momentum has never been stronger. Prior to the acquisition, we grew our design wins at single digits. Since the acquisition, we have increased the pace and scale of our design wins, now growing at strong double digits. In 2024, our design wins were about $14 billion. This year, in 2025, we have already surpassed that record, and we are now scaling up to $16 billion.
This design-win momentum is a direct result of our execution and broader product portfolio, but also, frankly, because we can now put end-to-end solutions in front of our customers. This is a pipeline that's booked, and it positions us well for meaningful revenue acceleration over the next five years and beyond. Speaking of revenue, the last two years our business was a bit soft. The softness had to do with an inventory correction; most of our customers were draining inventory through their entire supply chain. Now that phase is behind us, and growth is returning to our business. Going forward, as part of AMD, our trajectory is completely different. In the near term, we'll grow our revenue at 2x the market rate, scaling up to 3x the market rate over the long-term horizon as semi-custom silicon and physical AI start to ramp.
We're also tracking to 70% market share in our core FPGA business as we continue to extend our leadership and gain more and more share in that business. So this is how AMD Embedded shows up in the P&L: market share growth in the near term, and a clear path from revenue growth at 2x the market rate to 3x the market rate over the long-term horizon. Everything I've covered so far, from product portfolio expansion to design-win momentum to semi-custom silicon, is getting AMD Embedded ready for what's next, and what's next is physical AI. Physical AI will be one of the biggest transformative vectors for AMD Embedded. Intelligence will move from the cloud to the physical world, to the real world. Billions of intelligent systems will be deployed across all industries, in applications such as autonomous driving, robotics, and drones, over the next decade.
These intelligent systems will perceive, decide, and act instantly. The physical AI market is expected to be $200 billion plus by 2035. AMD is uniquely positioned to win in this market and gain significant market share. We're already working with many of these customers in these applications. AMD devices already power many of these applications, in sensor fusion, in vision, in control logic. We are building upon this foundation for whatever form physical AI takes, because we intend to lead it. Unlike the cloud, which mostly focuses on inference and training, physical AI requires us to optimize the entire pipeline, from perception to decision to action, all happening in milliseconds. Safety, privacy, and real-time performance are important considerations in physical AI.
AMD is the only one that can optimize across perception, decision, and action: perception being sensor fusion, decision being low-latency inference, and action being real-time processing. For sensor fusion, we have hardened sensor fusion IP for standards-based sensors, and we have our adaptive fabric when customization is required. For decision, for low-latency inference, we have FPGAs or NPUs, and when large models and big FLOPS are required, we have our GPUs. For action, we have x86 CPUs and ARM SoCs for real-time processing under safety constraints. We can mix and match any of these IPs to create purpose-built products for each application. The bottom line is, while our competition is mostly focused on inference and provides multi-chip solutions, AMD is the only one that can optimize across perception, decision, and action on a single embedded platform. I've covered a lot today.
If there's one takeaway I want you to have, it's this: AMD Embedded has become a high-margin growth engine for AMD. We used to be a focused FPGA business. We are now a multi-dimensional powerhouse built upon adaptive compute, embedded x86, semi-custom silicon, and physical AI, each a growth engine on its own. Our momentum will make Embedded a defining chapter in AMD's success story. Thank you. With that, I want to welcome our CFO, Jean Hu. Hey, Jean.
Thank you, Salil. It's great to see everyone. Thank you so much for joining us today. You have just heard from Lisa and the team about our strategy, our technology and product portfolio, and our focus on execution, which is driving tremendous momentum across our entire business.
Now it's time to connect the dots to your financial model, and in particular, how all of this positions us to address the very large market opportunity ahead of us and informs our long-term financial model. Fundamentally, as a management team, we are here to build a compounding business model that drives revenue growth and significant earnings-per-share expansion to create long-term value. So when you look at our financial framework for value creation, it has been consistently anchored on three core elements. First, it's about accelerating top-line revenue, targeting the most attractive market opportunities, and driving technology and product leadership. Second, it's about delivering compelling profitability, expanding margins, and driving operating leverage. And third, which is most important, as all of you know, it's about disciplined capital allocation, not only to fund the growth of the future, but to maximize returns for our shareholders and owners.
So by laser-focusing on these three priorities, this management team has demonstrated a phenomenal track record of execution and value creation. I'll do a quick recap of how this team has executed over the last decade. First, on revenue. In 2016, the company's revenue was only $4 billion. We have been able to drive significant revenue expansion to an expected $34 billion, a CAGR of 26%. On profitability, what our team has done is compound the fast revenue growth by expanding gross margin. In 2016, the company's gross margin was barely 31%. We expanded gross margin to 45% in 2020, and we expect to deliver a gross margin of 54% this year. And of course, gross profit expanded much faster than top-line revenue, at a 34% CAGR. This team also translated that operating leverage into overall operating profit growth.
If you look at operating profit in 2016, it was barely $43 million, and this year we expect to generate over $8 billion of operating profit, a 79% CAGR, so it has been tremendous how this team has executed financially. On capital allocation, Lisa talked about how we have invested over $100 billion through organic investment and acquisitions, and how those investments have positioned us for acceleration of our technology and product leadership and also market leadership. This team is very, very focused on return on investment. On organic investment, we prioritize R&D first, so if you look at our operating expense and CapEx, the vast majority of investment is in R&D. We have a very rigorous internal resource allocation process each year. We examine the market opportunities and the assumptions we made on different investments. Every project and every investment gets examined.
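(As a quick check of the growth arithmetic in that recap, here is a minimal sketch in Python, treating 2016 through the expected 2025 results as nine compounding years; the dollar figures are the ones quoted above.)

```python
def cagr(begin, end, years):
    """Compound annual growth rate between two values."""
    return (end / begin) ** (1 / years) - 1

# Figures quoted in the remarks; 2016 -> 2025 is nine compounding years.
print(f"Revenue:          {cagr(4e9, 34e9, 9):.0%}")   # ~27%, quoted as 26%
print(f"Operating profit: {cagr(43e6, 8e9, 9):.0%}")   # ~79%, quoted as 79%
```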
The team is not afraid of investing ahead of the curve. You heard from Lisa and from Mark how we establish innovation and long-term leadership. But even since I joined three years ago, I'm still impressed and amazed by how disciplined the team has been. If the assumptions we made were wrong, and a project would not generate the returns we estimated, they are also not afraid of killing the project and reallocating the resources to the projects that generate the highest return. And over time, as Mark talked about, we built the foundation for future technology leadership and shifted resources toward data center and AI. On the acquisition side, we apply the same discipline. We don't focus only on strategic fit; acquisitions have to generate significant returns versus our alternative uses of capital, like organic investment or buybacks.
That is really important for the team. A very good example is the recent acquisition of ZT Systems. We acquired the ZT design team. At the same time, we separated the ZT manufacturing operation and divested that business. So through both the acquisition and the divestiture, not only did we achieve our strategic objective, as Forrest said, of accelerating time to market significantly for our MI450-generation product; more importantly, we generated significant cash for our shareholders through the divestiture. So, as Lisa mentioned, we have built a tremendous execution machine and a playbook for acquisitions and integration, so we can really deliver returns for our owners. The discipline in capital allocation the team has shown really drives significant business transformation and acceleration of our financial momentum.
Lisa and Forrest talked about how we really transformed the business toward data center, from about $2 billion in 2020 to over $16 billion now, almost half our business. For a CFO, what's most exciting is that it carries a higher gross margin, so structurally we are focusing on the highest-growth, highest-gross-margin opportunities, and we have accelerated our financial momentum. If you look at 2025, we expect to grow revenue by 32% based on our estimates and our Q4 guidance, with both the client and gaming business and the data center business contributing to this increase. On gross profit, we expanded gross margin to 54%, and even while ramping MI350 significantly in the second half of this year, we have been able to continue expanding gross margin, driven primarily by the newest-generation products we're introducing across the Ryzen, EPYC, and Instinct families.
It's very exciting how we continue to drive a richer product mix across all our businesses. On operating income, we are leaning in and investing aggressively this year, and we continue to deliver an operating margin of over 33%. It's a very powerful business model, and we are pleased with our financial performance in 2025. Building on this strong financial foundation, I want to talk about the opportunities ahead and our long-term target model. Of course, it all starts with opportunity. This team has invested over the last decade to position ourselves to capitalize on the very large opportunities Lisa highlighted. We expect the data center opportunity to grow from $200 billion this year to over $1 trillion in 2030. Given these very large market opportunities, we see a significant step-up in our growth trajectory.
If you look at the last five years, we were able to deliver a very impressive 21% top-line revenue CAGR, driven by significant data center expansion at 52%, while our core business, which includes client, gaming, and embedded, including Xilinx, grew 10%. Looking ahead, as Lisa and the team discussed, we expect our data center business to expand more than 60%. Dan talked about significant server CPU expansion driven by AI adoption and how we are targeting market share gains to more than 50%. And Forrest talked about the inflection in our data center AI business, where we expect MI450 to start ramping in the second half of 2026, and we see a trajectory to tens of billions of dollars of data center AI revenue in 2027.
Of course, that's the trajectory Forrest talked about: more than 80% CAGR for our data center AI business. Coming to our client and gaming business and our embedded business, they are vital, diversifying elements of our business model. Jack talked about the tremendous momentum in client, with ASP increases and market share gains, and we will continue to drive that share to more than 40% of the market. Salil just talked about the significant embedded design wins and the market recovery, which will outpace market growth going forward. When you combine it all together, we expect the overall company to deliver more than 35% top-line revenue growth. If you look at our long-term financial model, we expect top-line revenue to grow more than 35% during the next three to five years.
We expect our gross margin to be in the range of 55%-58%. And we expect our operating margin to be more than 35%, a tax rate of 13%-15%, and a free cash flow margin of more than 25%. So as a management team, we are very excited about building on the business momentum we have today and executing toward this long-term financial model. Of course, I'm going to double-click on two areas I know all of you are going to ask me about: gross margin and operating leverage. First, on gross margin, we have multiple drivers to continue expanding our gross margin. First is scale. It's classic operating leverage: you all know our team does a great job of leveraging scale and volume to drive gross margin expansion. Second is optimization, which is about yield and test time improvement and design for cost.
Those are really important for AMD because we have a broad product portfolio targeting all different end markets at very high volume, and some of our products last for a long time. So when you make yield and test time improvements, you can make a significant impact on gross margin over time. Our operations team has done an excellent job there, and I'm very confident they will continue to do so. Third is mix, which remains the primary driver of our gross margin, and I'm really excited about several tailwinds we have. First, on the client side, Jack talked about how ASP increases from moving up the stack have helped us on revenue share. Frankly, they have also helped our gross margin expansion.
More importantly, we think we have upside to continue expanding client gross margin, because right now it's still much lower than the corporate average, and that expansion going forward is going to help the company generate a tremendous bottom line and cash flow. Secondly, on the mix side, Dan talked about the server business, the continued momentum, and the market share gains. More importantly, we are at an inflection point to gain more share in the enterprise market, and every point of enterprise market share gain is significantly accretive to our overall gross margin. And Salil talked about the opportunity to outpace market growth with really high-margin business, especially the AMD Xilinx FPGA business, which is very strong in aerospace and defense, industrial, and test and emulation. We see significant gross margin expansion and growth from those businesses.
With those tailwinds, I feel really good about continued gross margin expansion. Then let's talk about the data center AI business. First, as Forrest mentioned earlier, we are not selling rack-level system solutions. Our business model is not going to change: we are selling GPUs, CPUs, and sometimes DPUs, and it continues to carry the same gross margin structure we have today. Today, it is true our gross margin there is slightly below the corporate average, because our objective right now is really to scale the business, make sure our customers get a better TCO, and gain more market share. In a fast-growing market, maximizing gross margin dollars is our number one priority. Over time, as we add more capabilities each generation, we can also optimize our designs for workloads, and with scale, we can optimize manufacturing and operations.
We do see our gross margin continuing to improve steadily over time. So when you put it all together, there are puts and takes. That's why we are guiding a gross margin range of 55%-58%. The mix dynamics certainly determine where we land, including the pace and rate of our data center AI business ramp. But overall, we feel really good about the trajectory of continued gross margin expansion going forward. Then, operating leverage. 2025 is an important year for us to invest. Given the large opportunities we have, the team is leaning in to accelerate not only the hardware roadmap but also system and software investment. We also made the ZT Systems acquisition and several software acquisitions to add capabilities to address these large market opportunities. But we are very committed to driving revenue growth faster than operating expense growth.
On the R&D side, we'll continue to make it a priority, but we are driving AI adoption across all our engineering teams to improve the company's productivity. On the SG&A side, we'll continue to tightly manage operating expense and drive automation and AI adoption so that the operating leverage drops to the bottom line. So when you look at our long-term model, combining more than 35% top-line revenue growth with more than 35% operating margin is a very powerful combination, and we expect to drive earnings per share to more than $20 in the forecasted period. It's very exciting, and the whole team wants to execute on this model and deliver, just like the track record this team has shown you in the past. Now let me switch to capital structure and capital allocation. Our business model generates very strong free cash flow.
If you look at 2025, we expect to more than double our free cash flow versus last year, and we have a pristine balance sheet and a very strong investment-grade rating. We view that as a strategic asset, a tool in our toolbox we can use to continue driving the company's growth trajectory, and we are also committed to returning cash to shareholders. Since 2021, we have returned $8.6 billion of cash to our shareholders through buybacks, and our capital allocation principles remain the same. First and foremost is investing, especially given the large market opportunities, through organic investment and acquisitions, to enhance our ability to continue delivering returns to our shareholders. And from a shareholder return perspective, we'll continue to commit to buybacks to return cash to shareholders.
Of course, we'll offset employee stock dilution first, but we'll do additional buybacks when opportunities arise. Overall, we'll continue to maintain our strong balance sheet and financial flexibility to continue investing for the future. So in summary, it's a very exciting time to be part of AMD. We are targeting an over $1 trillion market opportunity, and we are driving top-line revenue growth of more than 35% and operating margin of more than 35%. And our disciplined capital allocation approach will allow us to continue compounding earnings per share to more than $20 during our forecasted period. With that, I'll invite Lisa to come back to give closing remarks.
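(For readers who want to tie the pieces of that long-term model together, here is a hypothetical sanity check. The revenue base, growth, margin, and tax figures are the ones quoted above; the four-year horizon and the roughly 1.6 billion diluted share count are illustrative assumptions, not company guidance.)

```python
# Illustrative only: rough EPS implied by the stated long-term model.
revenue = 34e9                               # expected 2025 revenue (quoted above)
for _ in range(4):                           # assumed horizon within "three to five years"
    revenue *= 1.35                          # >35% top-line growth target
operating_income = revenue * 0.35            # >35% operating margin target
net_income = operating_income * (1 - 0.14)   # midpoint of the 13%-15% tax rate range
shares = 1.6e9                               # assumed diluted share count (not from the remarks)
print(f"Implied revenue ~${revenue / 1e9:.0f}B, implied EPS ~${net_income / shares:.2f}")
# Roughly $113B of revenue and ~$21 of EPS, consistent with the "more than $20" target.
```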
Okay, well, that was a lot of information for one afternoon, so I promised to keep the closing remarks very, very short. But maybe I'll just give you a little bit of perspective.
I mean, I've been CEO of AMD for a little over 11 years. I'm super proud of the team that we have. And I think the key message is we've always been really clear about what our priorities are. It's about delivering great products and great technologies to the market. It's about having very, very deep and strategic customer relationships. And I can say for sure that there has never been a more exciting time to be at AMD. If you think about the incredible market opportunity that's out there, if you think about the product portfolio that we have, I think we know how to execute, and we know how to execute at scale, frankly. It's quite different running a $4 billion company from running a $34 billion company.
And the only way you do that is with a fantastic team that really knows how to optimize every aspect of the business. And we also have, I think, a history of very strategic and deep partnerships. I think what you're finding right now is that the market is favoring those who can partner, those who can bring people together. Because at the end of the day, I'm a firm believer that there is no one company that has every solution the market needs. Frankly, what we love is building partnerships where one plus one is much greater than two, in the sense that we bring our customers something they could never get on their own, and they bring us insight that we would not get on our own. And that's how we're putting this all together.
And the other part is to ensure that for all of you, as our shareholders and our supporters, we're also committed to having a very exciting market return for you. And I think Jean said it best, so we'll follow up with her numbers. We've shown you a lot of numbers today, but perhaps the three most important ones are best-in-class growth with greater than 35% revenue CAGR, greater than 35% operating margin, and greater than $20 EPS. So with that, thank you again for spending the last few hours with us. We are super happy to be able to spend some time to talk about the long-term view from AMD. And I think we're going to transition into Q&A at this point. Thank you.
Thank you, everybody. Give us a few minutes to do a little scenery change. Actually, I'm going to stick this in my pocket.
While we're waiting for the guys to set up the chairs and for the team to come up, just a couple of housekeeping items and rules of the road on Q&A. No 11-part questions, please. Liz and Prabh from our IR team are going to be running around with roaming mics, and we'll call on you to ask questions. We'd really appreciate it if you kept it to one question per caller. And then, after we get done with Q&A, there's going to be a cocktail reception over here, I guess to my left, to your right, all the way around the corner there. There are a bunch of demos of technologies from across AMD's businesses, including a very interactive, fun thing about the Helios rack.
That's a big touchscreen display thing that's actually really cool. So as we wrap up, after a day of talks, everybody needs a cocktail. Plus, there's some great demos in the back. So thank you very much for all of your attention today on us presenting the business. I've been at the company less than a year, and I've worked with all of these folks for a very long time in my external role. And it's just incredible what the team's executing against. So if you guys don't mind sticking your hands up for questions, maybe we'll start from there. Since Tim was looking over my shoulder the entire time during the presentation and peer pressuring me, Tim, go ahead.
I just had a quick two-part question. So sorry. So Lisa, first on customer concentration.
So we see the big deal with OpenAI, and there's some overlap with Oracle as well. So how do you think about customer concentration in the data center GPU business in this timeframe? And then quickly, on component cost increases, Jean, how did you factor those into your model? It seems like you don't buy a lot of these components, but they could really hurt the PC and the server TAM. So I'm just curious about your thoughts there.
Thanks. Sure, Tim. Thanks for the question. And look, the way I would say it is the following. We are super excited about OpenAI and the strategic partnership we announced. I think it sets a foundation, given its size and scale, 6 gigawatts over four or five years. That is a key foundation.
But the way you should think about it is that our goal is to have a very broad set of customers. And there are multiple customers we are engaged with right now at similar scale: multi-gigawatt scale, multi-generation. MI450 is an extremely competitive solution, and I don't want to under-describe the big leap there. We've done a great job with the MI300, MI325, and MI350 family, but MI450 opens up a lot more TAM for us. And with the work that we've done on both the hardware and the software platform, we've been deeply engaged with a number of customers throughout this period. I would say that even since the OpenAI announcement, it's really opened up additional opportunities, because, as I said, there's an insatiable demand for AI compute.
MI450 is an excellent solution, and I think that's the real anchor point for the type of growth we're talking about. Yes, OpenAI is an important foundation, but we will have multiple hyperscalers in the same timeframe, multiple customers at gigawatt scale, and multiple customers looking at multi-generational roadmaps with us. As we go forward, we get even more tailored to some of these workloads. Every time we engage at this type of scale, we learn, and we get even more tailored in our solutions. You saw that in Dan's roadmap when he talked about what we did in EPYC, and I think we're seeing a very similar trajectory in the depth of the partnerships across a broad customer set. On the component cost question, it's a great question. As Lisa mentioned, it's very complex.
But Forrest and our operations team have been working very hard to ensure not only that we have the supply of components, but that we know exactly the cost of those components. So we have been very disciplined about that to ensure we can support our customers. Right. Vivek here in the front. Thank you.
Thank you. Vivek Arya from Bank of America Securities. Thank you so much for an informative analyst day. So Lisa, I'm trying to understand how you came up with the 80% growth rate forecast for AI. Is it bottoms up? Is it top down? Because if I look at what it mathematically means, it's about $120 billion-ish, plus or minus, by 2030. So if I take that $1 trillion TAM, it suggests that your share aspirations are somewhere in the mid-teens. But how much of that number is ASICs? How much of that is China?
So what I'm really trying to get to is: what are your market share aspirations? And this 80% growth rate, is it built up from bottoms-up visibility and discussions with customers, or is it based on TAM and then a market share aspiration on top of that?
Yeah, absolutely, Vivek. So many aspects to that question. Let me try to get through it. Look, when we look at TAM as well as revenue projections: first of all, in the near term, it's bottoms up. Bottoms up means looking at exactly what our engagements with customers are, especially when we're talking about things like 2026 and 2027. Frankly, the lead times for wafers, for memory, for components are so long that we need to have that very detailed conversation many quarters out with our customers.
And then as we get into the medium term, it's more strategic conversations with customers. We don't do much from a true top-down perspective because, again, that's a little bit just math. But in terms of our thoughts on the TAM: the TAM, we believe, in this strategic timeframe includes GPUs, and it includes ASICs and accelerators. And our view of that has not really changed; I think we've been consistent. There's a place for ASICs in the accelerator TAM. ASICs tend to be good if your algorithms are a bit more stable and you know exactly what's going to happen next. With GPUs, we're innovating at such a fast pace, at an annual pace, with all the new data formats and all of the new algorithmic things coming out in the models.
GPUs, we believe, will still be the predominant percentage of the TAM: ASICs maybe 20%-25% of the TAM, and the rest being GPUs. Then in terms of how we think about China: China is a complex and dynamic situation, certainly as it relates to our ability to service China. As we said on our latest earnings call, we've taken China out of our revenue forecasts because it's too hard to call right now. We'd still like to sell to China, no question, but it's also a small piece of our TAM. Again, that's the best visibility we have at this point in time, and we'll continue to update it as we go forward. Hopefully, I covered your questions.
As someone who spent the majority of his teenage years sitting in the back of the class, Joe, I see you way back there. So don't let that last row get left out.
We did not know where Matt was going with that one. We didn't know, you didn't know.
Yeah, Joe Moore at Morgan Stanley, thank you. I guess, along the same lines, how do you think about market share five years out? I mean, either you're delivering better ROI than NVIDIA or you're not. What's the case for low-teens share versus much larger share, or much smaller share, depending on how you deliver technology-wise?
Yeah, Joe, let me comment on that, and that was part of Vivek's question.
Look, I think the key thing for us and for the market is that no one enters a market to be a very small share, right? So we enter a market thinking that we can be a meaningful portion of it, that we're delivering value. That value comes in technology capability: we are doing things in different ways, we're offering products with real leadership in certain capabilities, and we're also offering overall total cost of ownership. All of those things are important. So certainly from a market share aspiration, we mean to be a meaningful portion of the market, a meaningful double-digit percentage of the market. Now, the market changes, all kinds of things change. But our view is that there is no other silicon TAM as exciting as the data center TAM, and especially the data center AI TAM.
And so we have gone all in in terms of strategic investments in this area. And I think we have a very, very strong roadmap. You heard the very early pieces from Forrest and Vamsi about what we're doing. MI450 is a significant step up, and the MI500 series is another very significant step up. And with that, we see the opportunity to really differentiate in this market, especially as there's a genuinely diverse set of workloads. What you need for the highest-end training may not be the same thing you need for your best throughput in inference; actually, I can say for sure it's probably not the same. And as a result, there are ways you can tailor products so that you get the best total cost of ownership.
And when you're investing CapEx at this scale, it is absolutely worth it for the largest hyperscalers to find the most optimized solutions. And that is, I think, what we see: the conversation is much more workload-specific today than "do you have a GPU that works."
Sticking with the back row theme, Stacey, go ahead.
Thanks. Stacey Rasgon at Bernstein. So you've talked about tens of billions of dollars of AI revenue in 2027. You do, I don't know, $6.5 billion or so this year. If I just took that 80%, that would put you at around $20 billion or $21 billion in 2027, which I guess is tens. Now, maybe it's possible that the growth rate should be faster in the early years than the later years, just given the base is smaller.
I think the Street's modeling, I don't know, $20 billion, $29 billion, or $30 billion in 2027. How do you feel about your, I guess, that medium-term trajectory relative to where those kind of current expectations are?
Yeah, I think, Stacey, the way I would say it is, since we're giving a three-to-five-year outlook and the outer years have a little less visibility than the near-term years, we would expect the near-term years to grow faster than 80%, so greater than 80%. And I think we are comfortable with tens of billions of dollars being in the zip code of, let's call it, where some of the thoughts are overall. Jean, maybe do you want to add to that? I think you covered it. When we look at the current consensus for 2027, it's roughly in line.
We do expect, like Lisa said, to grow faster.
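(The compounding math behind that exchange, and behind Vivek's earlier $120 billion-ish figure for 2030, does check out; a minimal sketch, using the analyst's roughly $6.5 billion 2025 base rather than any company guidance:)

```python
# Illustrative only: compounding the analyst's ~$6.5B 2025 base at 80% per year.
base_2025 = 6.5e9
print(f"2027: ${base_2025 * 1.80 ** 2 / 1e9:.0f}B")  # ~$21B, i.e. "tens of billions"
print(f"2030: ${base_2025 * 1.80 ** 5 / 1e9:.0f}B")  # ~$123B, mid-teens share of a $1T TAM
```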
Josh over here on the right, my right.
Thank you for taking my question, and thanks for hosting such an informative day. Lisa, in your prepared remarks today and on the earnings call last week, you called out multiple gigawatt-scale projects that are in development. What hurdles need to be cleared to convert those, both on your side and the customer side? And any context or scope you're able to provide on what those look like compared to the OpenAI engagement, which you were able to give us a lot of details on publicly?
Thank you. Yeah. Josh, thanks for the question. I think the best way to say it is we have very deep relationships with all of the hyperscalers and many of the top AI-native companies.
You can imagine that everyone is super interested in MI450. And we expect that MI450 will be deployed at very significant scale. In terms of multiple gigawatt customers, my comment is really around just the scale of forecasts that we're getting from customers on their near-term needs. So there is a desire for a significant amount of compute. We are working with the supply chain today to make sure that we have the broad ability to support all the compute that's required. And just to make sure that we have that out there, I think we have built a supply chain for the type of growth rates that we're talking about. And I think the customer momentum is very strong. So much of the work right now is continuing to work with Vamsi on some of the software aspects of it. But it's really an execution mode for MI450.
As we get those fully ramped up and validated, we would expect to convert a number of those opportunities.
All right. I'm going to shift gears and go to the front row. Oh, Liz, I can do it. Don't worry. You're going.
Yeah, thank you very much. Aaron Rakers with Wells Fargo. I want to shift gears a little bit. In the embedded presentation, there were several references to semi-custom opportunities, particularly around the data center side. So I'm curious, what exactly is that? Are you going to participate in some, if I call it, XPU or XPU-attach opportunities? When do you see those maybe materializing? And Forrest, real quickly, I'm curious about silicon photonics. Enosemi, I think, was an acquisition you made. Where does that stand in the strategy from a rack-scale perspective as well? Thank you.
Yeah, sure, sure.
Aaron, maybe I'll start, and then I'll let Forrest talk about optics. Look, semi-custom has been in our strategic tool chest for a number of years. I really believe in this business model. It's actually not a product; it's a capability to take our IP and really tailor it for specific applications. I think Salil has done a particularly good job of taking all of the semi-custom components in our technology IP to some of his top embedded customers. They've seen a lot of value in that, and you saw that in aerospace and defense customers, automotive, and a number of communications sockets as well, where there isn't something easy to pick off the shelf, but we have all the pieces for them to put it together.
To your specific question about data center, we have won several data center semi-custom opportunities. They tend to be not the compute unit itself, but some of the attach around the entire system, including some networking components and some other specific components. If you were to fast-forward and ask how semi-custom could evolve, I think you can imagine semi-custom built off of our GPU compute capability. The beauty of chiplets is that you really can customize pretty easily, right? You could take our entire foundation and say, hey, let me take this compute die off the AMD standard product and put in a customer-specific die. And I don't know what you call that. Do you call that a GPU, or do you call that an XPU, or do you call that a GPU-plus?
But at the end of the day, what we're trying to do is provide workload-specific optimization for the highest volume sockets. As I said to Vivek's question, I still believe that standard product GPU will be the largest driver of growth just given the rate and pace of innovation that's happening there. But there may be opportunities for us to do customized GPUs as well if there are specific workloads or use cases. I think it just broadens our overall capability in the data center space and really across the computing space. Maybe Forrest on the optics.
Yeah. And Aaron, on the optics side: for the last 20 years, I'd say that silicon photonics and optical interconnect within the rack has been a technology that's always three to five years out. And it's been that way for the last 20 years.
And the ingenuity of the engineers in getting copper-based solutions to continue to scale has been incredible, better than expected, and that's continuously pushed out that horizon for the age of optics. But as Mark mentioned earlier, we're approaching the point where it's pretty clear that the age of optics is almost upon us, and that time horizon that's indefinitely been out there has collapsed. So we believe that in the 2027, 2028, 2029 timeframe, you're going to see a transition to optics, first in the large-scale rack-level systems for scale-up fabrics and then, over time, in other places as well. From an I/O density point of view, a bandwidth point of view, and a power point of view, it just makes all the sense in the world.
We're certainly getting fit to fight. You mentioned Enosemi; that was a small acquisition to augment our already existing optics team. But as Mark said, we feel 2027 and beyond is an era where optics and SerDes will continue to coexist for a short period of time, but it's all going optics long term. I don't know if you wanted to add anything to that, Mark.
No, well said.
All right. I don't know what it is about sell side either in the front row or the back row, but Chris Roland in the back there.
Thank you so much. Chris Roland, Susquehanna. Mine is for Jean. If I'm doing the numbers right, it seems like OpEx growth is, call it, mid-20s percent versus revenue growth at 35%. I guess, first of all, should we front-end load that? Could it even be higher than 25%?
And then, secondly, the Street is dialed in for high teens for next year, mid- to high-teens percent. Does that mean the direction of travel is higher, and should we adjust our models?
Yeah, Chris, thanks for the question. The way to think about it is this: 2025 is really a year we're investing, leaning in, investing significantly more. So if you look at the OpEx increase this year, it's actually pretty high. But going into next year and going forward, we do believe we can drive revenue growth to be higher than OpEx growth. So year over year, I would not suggest you front-load it all. From a modeling perspective, if you're asking the best way to model it, you can just model it that way. I don't have a precise path yet, right? We're doing our 2026 annual operating plan right now.
So it is, as I said, a rigorous planning process to figure out where we're going to invest. But one thing I can tell you is the revenue growth next year will be higher than OpEx growth.
All right. I think I can't see Brett, but I know he's behind that very large pole, so we'll go back there. Thank you.
Yeah, thanks. I'm hiding here in the background. It's Brett Simpson at Arete. And thanks again for hosting an informative day. Lisa, my question is about supply in the industry. And you put out a large CAGR for data center. And the TAM in 2030 translates to a huge amount of fabs that need to be built for the industry. So my question is, I guess DRAM looks like it's going to get really tight into next year. Foundry, at leading edge, is extremely tight as well.
So can you maybe talk about what AMD is doing with their key suppliers in terms of getting aligned with how you see the growth trajectory in AI?
Thank you.
Yeah, thanks for the question, Brett. So look, we have been actively planning the supply chain all along, certainly for the last few years and going into next year and the following years. The fact of the matter is we have a lot of experience in being first on some of these things. We're very early in the advanced nodes, so planning is very important. We're very early on high-bandwidth memory, so planning is super important there. And the best thing I can say is, for sure, the supply environment is getting tighter.
But I can say that we feel very confident that we have the overall supply chain capability to support the growth model that we're talking about. And the key about the industry in general is we've generally been able to really get supply to ramp when it's clear where the demand is. And as everyone's putting out their markers right now in terms of what they think the overall demand is, we're working very closely with our top suppliers to actually triangulate all of that and ensure that we have really our fair share of the growth that's needed to support all of this.
We'll go right here. Go ahead, sir.
All right. Thanks. Ben Reitzes from Melius. Thanks for having us; appreciate it. I wanted to readdress the first question with regard to OpenAI and say something that's on everybody's mind.
There's obviously this worry about OpenAI, just with the customer concentration, the CapEx concentration in the industry. People are particularly worried about it now, and I'm sure this isn't news to you. You just put out these really great forecasts that are awesome, and congrats. Did anything happen that makes you more confident about it, maybe more line of sight on the six gigawatts? Is there anything you can say on the customer concentration question that gives you more conviction in these great forecasts, considering they could mean $20 billion plus in revenue in a given year?
Yeah. So I think the best way to think about it, Ben, is the following. First of all, I will say OpenAI is definitely one of the most aggressive, the most aggressive, when it comes to their compute forecasts. So that is definitely true.
I think the way we've structured our engagement is very disciplined. As much as we like the headline numbers of six gigawatts over the next four or five years, it's a very disciplined engagement on what do you need each year and where are you going to get it from. We've already said the first gigawatt will start in the second half of 2026 and go into 2027. It's a very disciplined way of saying, here's when it will be installed in these data centers, and here's when you have to have your supply ready. That's the only way you make these types of things happen.
That was really what was so important, because it is such an impactful deal: ensuring we put all of those measures in place in dealing with OpenAI as a customer. That being the case, I will say that we have a broad base of customers that we serve in the data center. On the earlier question about customer concentration in the data center AI business, we expect to have multiple similar-sized customers in the strategic timeframe, in the MI450 timeframe. That is how much interest there is in MI450. That means we're dimensioning the supply chain to supply multiple gigawatt-scale customers, and we're working with our partners to do that. I think the best way to say it is we are quite disciplined in how we plan these things.
I mean, it's significant scale and significant volume, but we're also quite comfortable that we know how to do it.
All right. We're going to do a little ping-ponging here. Antoine, I think you're way over on this side. And then we're going to go to Will, who's way over on that side, just for fun.
Actually, maybe I'll ask one on the edge AI opportunity. There's been a lot of chatter around a lot of compute from content delivery networks and edge telco. I'm just wondering what use cases AMD is seeing at the edge.
Maybe Jack and Salil, do you want to come on up and say a couple of things?
We're seeing use cases in the 500-billion-parameter model range. If you think about the latency from the cloud to the endpoint, we're seeing opportunities in hospitals.
In the real world, it's basically things like autonomous driving, right? And drones, robotics, and things like that. But it's really cutting across all industries, I would say. Almost every application, every algorithm is being rethought these days, and people are thinking about how to apply AI instead of the traditional algorithms. In the past, you probably heard of OpenCV and vision algorithms; none of them exist anymore, right? It's all AI algorithms now. So in some ways, every device, every application in the physical world will have some AI infused in it, which is why the silicon TAM for physical AI is super big.
All right. Will,
thanks for taking my question. It's Will Stein from Truist Securities.
Lisa, thank you for hosting such an informative day, with these very nice aspirational TAMs and margins and all the earnings growth. It's wonderful, and I appreciate your prior comments about the diligence in building up this forecast. But there are two constraints that people ask me about a lot that seem a little further from AMD's control than what we typically see: one is your customers' ability to fund all this, and the other is power. I think we'd really appreciate some clarification as to why you believe this is going to be able to be funded and powered.
Thank you. Yeah. So there are a couple of different questions there. To answer your question on how much time we spend on this: we spend a lot of time on power.
We're spending quite a bit of time looking at the power roadmap, not just for our customers, but around the world. Where do we see sources of power? We're talking a lot about the U.S., but there are significant ramps in power outside the U.S., especially in places like the Middle East, Southeast Asia, and a bunch of other places, so we're looking at power roadmaps there. We are working actively with each of our large customers on their power roadmaps to ensure we know when their data centers are ready; that's how we're matching up the forecast. Then, on the second point, as it relates to financing: certainly, financing is a key piece of it.
Maybe if we just put OpenAI aside for one minute, because I think that might be a bit of a topical conversation. All of the other large hyperscalers who are talking about raising their forecasts are extremely well funded. I mean, their balance sheets are really strong. And the fact that they are choosing to invest more in AI should be a good indicator to the audience that they see value in it. The same customers I might have talked to 12 or 18 months ago maybe weren't forecasting that their CapEx would just keep going up. What's changed is that they've seen real value in their business and in their overall strategy.
So I think we should feel confident that the hyperscalers can afford to invest the type of CapEx we're talking about, because they expect to see the return on the other end. Now, coming back to OpenAI: there's no question that OpenAI is leading the market in terms of forecasting what they want in compute. Again, not to speak for them, but if you look at the growth of their user base, the growth of their ARR, and the number of users who still don't have as much compute as they want, I wouldn't bet against that. I really wouldn't. What we must do as good stewards of the company is ensure that everything is cause and effect.
I think that's why, when you look at the structure of our deal with OpenAI, it's very much an aligned incentive structure, in the sense that as they see visibility for funding and installing all of that GPU capability, or AI capability, we also plan together with them. And the reason we are so forward-leaning on this is that it's great for us in terms of the amount of learning we get from engaging at gigawatt scale with a customer on the bleeding edge of foundational models. I would say Vamsi's team has had a lot of torture, but also a lot of fun, in the process. So I think we're doing this in a very structured way. We absolutely think about all of the aspects you're talking about.
And when we look at all of the above, we say this is a very unique moment in AI, and we shouldn't be short-sighted in thinking, hey, are you going to see returns in a couple of quarters, or are people going to be interested in financing it? If AI usage grows as much as we expect, there's going to be plenty of financing for all of the returns and capabilities that are there.
Blaine.
Hey, thanks. Blaine Curtis, Jefferies. Maybe an AI-adjacent question: I wanted to ask about general-purpose servers. You showed the market accelerating. I didn't know how literally to take the 60% overall data center and then 80% AI; it would get you a pretty substantial double-digit server number. So maybe, how are you thinking about the market and then your growth on top of it?
Yeah.
You want to take that, Dan? You can take it.
Look, hey, Blaine, thanks for the question. So on server, look, I showed the TAM for AI. The general-purpose TAM is also double-digit, right? So we believe there's a lot of growth potential there, and with our roadmap, we believe we can definitely far outgrow the market, as we have been, and continue to gain share. I'm not sure if that got to your question.
Maybe let me add something to that, Blaine, if what you're asking is what we're thinking about server TAM. On server TAM, I think Dan showed something like high-teens TAM growth. The way to think about that, though, again goes back to the question of how we get to those numbers.
In the near term, we have very clear forecasts from our customers for 2026 and 2027. The uplift in cloud demand has actually surprised us. We saw the first aspects of it a couple of quarters ago; it was maybe one customer, and we thought, well, maybe they're building ahead or refreshing. Now we've seen it at multiple hyperscalers, and when we ask what's happening underneath, it is, as Dan described, multiple vectors. As you have more AI inferencing being done, you need more general-purpose compute. As agents really start kicking in, it's like having 1,000 more people; they have to be computing on something, and they need general-purpose compute to do it. This feels like a real, durable trend, and I think that's really nice, because there was this debate.
I mean, frankly, for the last couple of years, we showed the TAM as relatively flat, right? There wasn't a lot going on. I think people were holding off on some of their refreshes as they tried to figure out how much to invest in AI. Now they've realized that, yes, I have to invest in AI, but yes, I have to invest in general-purpose compute as well. And it plays really well into the strength of our overall roadmap.
Liz, maybe you could come to Gary, and then Lou will be over here.
Thanks for taking the question. Gary Mobley at Loop Capital. I had a question about ARM-based compute, not as a competitive threat, but as an opportunity.
Now, I understand that you use the N-series cores in Pensando and in whatever you do in Xilinx as well. But what about licensing or creating V-series ARM-based processors for head-end compute, or even leaping into the custom compute opportunity?
I'm going to let Forrest talk a little bit about where we use ARM, and I can add.
Yeah. So ARM is a partner for us in a number of different products. As you mentioned, we use some N-cores in the Pensando technology, and we use ARM cores in some of the semi-custom products as well. We expect that to continue. They've got a strong roadmap, and there are applications for which that's a good fit. Certainly, beyond that, if there are semi-custom opportunities where customers particularly want ARM as a compute engine, we are more than happy to do so.
And we've invested in having our foundational infrastructure support either x86 or ARM compute elements. So the Infinity Fabric that Mark talks about all the time, that plumbing is set up to give us diversity. It really will depend on customer choice. But I would be remiss if I didn't also say that, for Dan's roadmap, we think we are certainly competing for TCO and performance advantage not just with Intel, but also with ARM. And we really like his roadmap. We think it offers outstanding performance and outstanding TCO value at multiple different points. It's one of the great things about having both the performance- and frequency-optimized cores and the cost- and area-optimized cores coming from the same technology tree: it gives us very broad coverage of pretty much every space that's pertinent in the data center.
So I really like Dan's roadmap against all comers.
Okay. Thank you. Lou Miscioscia, Daiwa Capital Markets America. The comments before about edge AI PCs, this is very interesting, with potentially great growth. I realize writing applications on the enterprise or consumer side takes a lot of time, but when do you think we'll actually see an inflection point there? And also, if I saw the charts correctly, you talked about growth and share gain, but you didn't give a TAM. So anything you could fill in there would be very helpful.
Jack, do you want to take that?
First, on the client TAM, we're projecting low- to mid-single-digit growth over the next three to five years. Then on edge AI, the beauty of it is we have the same ROCm software stack that goes from Instinct all the way to the edge.
We make it very seamless for our developers, and we're just getting started. We have our first Ryzen AI PRO that we just launched a few weeks ago, and we're going to build up this roadmap to be AI-first versus gaming-first, and you'll see future products that we haven't announced yet in upcoming updates.
CJ, I see you in the back, and you were kind enough to wear a tie. Well done, you. The mic is coming up behind you. Over to you.
I figure if I'm sitting in the back row, I should probably dress nice. But thank you for today; appreciate it. CJ Muse with Cantor Fitzgerald. So, a clarification and then a question. The clarification is on the $1 trillion TAM and your target revenues: are you including HBM in there? And we'd be curious how you're thinking about HBM within your gross margins.
And then to my question: curious about your partnership with OpenAI and their Triton software stack, where you're trying to enable a heterogeneous hardware environment. Would love to know how that is informing your business with them, and how perhaps it's enabling you with other customers. Thanks so much.
Sure, CJ. So, clarification on the $1 trillion TAM: that is a silicon TAM. It includes GPUs, it includes the HBM that goes with the GPUs, it includes CPUs, and it includes the part of the networking market that we service, call it scale-up networking. It doesn't include scale-out networking and switches. And the way we think about HBM is, on the products where we're doing the primary design, we should get a healthy margin. I mean, that's where all of our investments are going.
For the things where we're attaching, I think we are adding value in the overall, let's call it package, but we're not expecting the same margin on HBM. And I think that's fair because if you think about just where the value is coming from, we want to ensure that in the end, we have good overall TCO to our customer set. So that's the way I would state how we're thinking about it. And then in terms of OpenAI and the heterogeneous environment, maybe Vamsi, I'll let you talk a little bit about the software environment.
Yep. So maybe first I would say that the work we have done with Triton is very aligned with our overall strategy to leverage rising abstractions, right? Triton is a higher-level programming abstraction, so it's more productive and easier to compile to hardware.
So that's why we collaborated with them, and the Triton enablement is going well. What it does is, when a new workload comes along, it takes down the amount of time to get onto hardware. So if you have, say, a model that's stitched together with 100 kernels, maybe there are a few kernels that will still need to be written with somewhat lower-level programming, but the majority of them can be compiled with Triton, and it gets us to market faster. So that's the advantage. And the work is not specific to OpenAI. Triton is used quite heavily in the industry; when a lot of new day-zero support comes along, it gets enabled because Triton's backend support for AMD GPUs is becoming more and more adopted.
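To make that concrete, here is a minimal sketch of the kind of higher-level kernel Vamsi is describing. It is based on Triton's public vector-add tutorial, not AMD- or OpenAI-specific code; the names (add_kernel, BLOCK_SIZE) are illustrative. The point is that the same Python source is JIT-compiled by Triton's backend for the target GPU, including AMD GPUs via ROCm, rather than being hand-written per architecture.

```python
# Minimal Triton kernel sketch (adapted from Triton's public vector-add tutorial).
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                       # which block this program instance handles
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                       # guard the ragged final block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = x.numel()
    grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)   # launch one program per 1024-element block
    return out
```

In the workflow described above, most of a model's kernels could be expressed at roughly this level and compiled by Triton for the target hardware, with only a handful still hand-written in lower-level code.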
All right.
Our handy-dandy little clock here, the yellow light just went on. So I think we have time for maybe two more questions. Chris, since you're right here in front of me, I will walk over here. Someone in the front row.
So Chris Caso from Wolfe Research. Jean, a question for you with regard to the gross margins. And one of the things you said is that in the near term, your priority was to maximize gross profit dollars. So how do I reconcile that with the longer-term guidance, 55%-58%? What's the time frame for that 55%-58%? And should we assume, therefore, that the gross margins are diluted somewhat in the early stages of the MI450 ramp?
Yeah. The first thing I would say is, if you look at 2025, we have been expanding our gross margin each quarter.
We are ramping MI350 steeply in the second half of 2025. So overall, from a gross margin profile perspective, we absolutely want to make sure we continue to improve gross margin with all the tailwinds I talked about. On the mix, as you can imagine, it will really depend on the pace of the ramp of the data center AI business, right? We said we're going to start to ramp in the second half of 2026, and then into 2027 it's going to ramp significantly, while our other businesses' gross margins continue to go up. That mix absolutely determines where we land in that range: if the data center AI volume is really high, we could be closer to the 55% end, and if the mix plays out differently, we could be at the higher end of the gross margin range.
But that is within the forecast period of three to five years.
In the near term, though, is there an early dilution?
The way to think about the near term is we are ramping MI350, right? And we'll continue to ramp it in the first half of next year, so our gross margin should be quite consistent with what you're seeing right now; we guided Q4 at 54.5%. In the second half, when we start to ramp MI450, that is when it's really going to depend on the pace of the ramp and the volume. Right now, actually, I don't know yet.
All right. So Mr. Lipacis, no pressure. You're the last question before we go to cocktails.
Great. Thank you for the excellent presentations. Very informative.
So Lisa, I was actually at the Analyst Day where you guys made the observation that you rounded to 0% share in server CPUs and then made the case that you were going to take share. And I think you upended everybody's expectations, certainly mine. It seems like there are a lot of similarities in how you prosecuted the server CPU market and how you're going after the GPU market in AI. I was hoping that you could just lay out that playbook, the important elements of the playbook that you used for CPUs and how you're laying that out right now for the GPU. And then on the other side, the competitive environment: you have a different competitor, and I'm hoping you could help us understand what's different on the competitive front from a challenges standpoint. Thank you.
Sure. I'll start, and then I'll let Forrest or Mark or Vamsi add if they have things to add. Look, on the CPU side, I think we always started with the notion that this is about technology. When you have disruptions in technology, you can actually make a really big difference if you make the right technology bets. And I think what you saw a little bit in what each of Mark, Dan, and Forrest presented today is that it was very, very systematic. When we started with our plan, we said, look, it's going to take us three generations to be competitive, but we knew exactly which steps we were going to lay out as we went from Naples to Rome to Milan. After that, we were at scale.
And when you're at scale, you can actually fan out and address more of the total addressable market. That's when we went to multiple cores and multiple segment optimizations. And now I think we're in a different place, and that different place is being the incumbent. You have to imagine we don't take any share for granted; every generation, we have to prove that we are better, and better by a lot. But when you're in the incumbent position, you just have such a different seat at the table. That's where we are today in server CPUs, especially in the cloud. We're planning long-term roadmaps, multiple generations out. And frankly, I can't say that we get it right all the time.
But our customers are very willing to tell us where we can adjust and where we should adjust so that we can be a very significant piece of that business. So yes, we're fortunate that at the same time the market is inflecting, we have such a strong product portfolio. And if you translate that into the GPU side, it is similar in some ways and different in others. Where it's similar is that it's foundational; it's all about the technology. We know that. And we've had a very deliberate path of going from MI250 to MI300 to MI350. We chose not to do rack-scale solutions this year because we thought that would be hard, and we wanted to prioritize time to market for our new data formats and new solutions.
But we chose to set ourselves up so that with MI450, we had all of the pieces. That's why we did the ZT acquisition. That's why we did the Pensando acquisition. That's why he's been hiring like crazy on the software side. And as we go to MI500, there's another set of significant innovations coming on board. So I think it is similar in the sense that it's very foundational technology. Where it's slightly different is that the size of the market and the speed of the market in AI are different from what we saw in the general-purpose CPU market.
I actually think that plays in our favor, because when you have a market that's moving that fast, when there's so much opportunity to disrupt and innovate, and when one day you're thinking about training systems and the next day you're thinking about fine-tuning and inferencing systems that you need to deploy quickly, it favors differentiated solutions. I think we've made tremendous progress. We give all of our competition lots and lots of credit, because that's who we are; we earn every socket. We feel really good about our trajectory. I'll let you guys add. You can see we're passionate about this topic as a management team.
We are super passionate. I'll just make a quick comment and connect it back with what you heard from Jean about our disciplined capital allocation. We run a very, very disciplined process.
When you look at that pyramid I showed, of the value of all the building blocks leading down to our product pillars, that capital allocation is scrubbed many times over to understand what value it's providing. So in your analogy to the server roadmap, that expanding roadmap came from deliberations year after year, several years in advance, on the arc of R&D you need: deciding on the differentiation by listening to our customers, building it in, and delivering. Same thing with the GPUs. We have created a very disciplined process as we make those decisions across every business unit with R&D, closing back with Lisa and Jean. And we're going to continue that going forward.
It's not often that I publicly disagree with Lisa, but let me publicly disagree with her on one point. She said it's all about technology. And I would only quibble with the word all.
Technology is the table stakes that allow you to play at the table. But the other critical element that we learned during the server journey, and that I think is pertinent here as well, is customer trust, because you're talking about a market and applications that are core to our customers, very important to our customers. Building that trust, where they understand that they can rely on AMD, that they can rely on our technology, and that we're going to execute for them, is extremely important. We built that in a very deliberate way on the server side, and I think we're doing the same thing here as well.
A phased approach to building with our lead customers: trials with MI250 or MI300, expanding the usage, and then things like the implicit validation from large customers embracing MI450. That is very important in reinforcing that lead customers trust AMD to provide the core of their AI solutions, and it's also an incredible validation for the rest of the market. So, sorry to disagree with you a little bit, Lisa.
That's what a great team is for.
Well said, Forrest. Just a couple of things from me. Thank you to all the executives for coming together here today; you can feel free to give them a round of applause, for sure. But I think it's just as important to talk about the people that didn't come up on stage, the whole other part of the executive team that wasn't presenting today.
There was a hell of a lot of work done by Liz and her IR team, by Phil and the communications team, by the events team, by everybody at AMD. We're trying really hard to represent the 25,000 engineers and to make them proud, and it takes a lot of work to put on an event like this. In all my days on the sell side coming to these things, I had no idea how much work it was. I found that out, and I just want all those people to be recognized as well. So Liz, the whole team, thank you very much, and let's have a cocktail.