From BofA's Semiconductor, Semi Cap Equipment research team, absolutely delighted to have Dan McNamara, Senior Vice President and General Manager of AMD's server business. We will go through a list of Q&A that I've prepared, but if you have any questions, please feel free to raise your hand, and I'll be sure to get you in. Very warm welcome to you, Dan. Really appreciate you taking some time to chat with us.
Thank you, Vivek. Very, very happy to be here, and thanks, everyone, for joining this session.
Right. Excellent. You know, I guess you are in the thick of it. Maybe let's just start with the state of the union in terms of the overall demand environment, cloud versus enterprise. You know, what are you seeing right now versus what the assumptions were, say, at the start of the year?
In the overall data center, what we're seeing is a mixed demand profile. In the cloud, there's definitely some inventory and different optimizations depending on the cloud provider. In the enterprise, there's clearly still a bit of macro concern overall and, you know, a lot of cost optimization going on. That's the overall view we have. Where we're focused, though, is, in these periods, to get very focused on our customers and continuously drive with them on what we're doing from a product portfolio standpoint. Our goal is to be the most strategic supplier to the largest data centers in the world, and, you know, we believe we have the product portfolio across a number of different vectors to be that.
You know, that's the current view we have across the data center.
Got it. What's the mix, you know, kind of between cloud and enterprise now versus what it was last year?
The mix. Lisa and Jean talked about this in Q1, I think, that the mix was a little bit higher to cloud in Q1 based on this macro situation for the enterprise. You know, it was in the 70% range to cloud. What we see going forward is continued, not acceleration, but continued growth in terms of Genoa. When we look at what our cloud customers are doing with Genoa, we're pretty excited, right? Because there's expanding usage across their internal properties, and then obviously the public third-party infrastructure-as-a-service solutions going out there. We have Genoa running in all of the top cloud providers' data centers right now, and things are going very, very well.
From an enterprise standpoint, here's the other part: with enterprise, there's a tremendous amount of evaluation going on and excitement around the performance and energy efficiency that we're delivering with Genoa. With Milan, when we introduced it back in 2021, it was an inflection point for the enterprise. We brought a pretty dramatic IPC uplift and a very nice 25% performance-per-watt gain, and we started getting a lot of traction with Milan. With Genoa, we've done the same. We brought in more performance and more energy efficiency, and what we're seeing across the top Fortune 500 is a lot of evaluations and a lot of good plans to move forward with Genoa.
If you have a cost-conscious workload, if you will, there's also the opportunity to use Milan, because Milan also delivers better performance than Sapphire Rapids in a large number of workloads. We see this sort of one-two punch in the enterprise, and we believe that, you know, we're very well positioned here going forward.
Got it. When we look at the second half of the year, Dan, there is a very steep expectations ramp, almost 50% half-over-half. What needs to happen? First of all, how is the confidence and visibility in achieving that kind of objective? Then, what needs to happen from an... Is it a supply issue? Is it an execution issue? Like, what needs to happen from your perspective to achieve that milestone?
I just talked about Genoa, and that's sort of the point of the spear with, you know, our confidence going forward in the data center. We follow up very quickly, and you're going to hear about this next week, with Bergamo, which is a cloud-native optimized device with high density and very good performance per watt and energy efficiency for cloud-native computing. We also are announcing, or launching, Genoa-X, which is tuned for technical computing, so workloads like computational fluid dynamics and EDA and all sorts of advanced physics modeling. We feel like those two products will get nice adoption across both cloud and the enterprise. We follow that up with Siena, which is yet another optimized part of the Zen 4 portfolio.
It's optimized for telco, edge, and storage. That brings very high-density compute at a very low cost and low power package. From a server roadmap through the balance of this year, we're executing and enabling our customers to deploy these three new products in addition to Genoa. Lastly, with the Instinct product line, with MI300, that starts to ramp in Q4, and we see that with supercomputing and some new AI wins that we have. We feel that's why we have confidence.
Right.
in the back half. You know, our customer feedback today on the evaluation of all of these products is very, very strong in terms of the value we're bringing, and again, it's all about the workload. For each different workload, we're delivering a different optimization point, whether it's performance or performance per watt or even cost. That's really been the strategy across server. Obviously, with the MI300, we're very excited about the pipeline that we have there and the execution pace we're on.
Got it. Just one thing that has come up, and it would be great if you could clarify: the support for different memory configurations by Genoa. Is there any kind of workload that it is not able to meet versus what you thought when the original specifications were laid out? You know, is there anything in terms of the one-DIMM solution versus the two-DIMM solution? I think it'd be useful to get a clarification there.
Sure. Genoa is completely on track. We believed that the ramp would be a bit slower than Milan's. Milan was a drop-in to the Rome SP3 platform, and this was a pretty big platform change, right? We added PCI Express Gen 5. We added DDR5. The other thing we added was CXL for memory expansion. There's a lot happening, not just with the server, but also our customers are doing different things with this next generation of server. We're completely on track right now, and we also believe, and have said previously, that we will see Milan coexist with Genoa through the course of 2023. From a DDR5 standpoint, we believe that one DIMM per channel covers 98%-ish to 99% of the workloads out there.
We do have a number of platforms out there that support two DIMMs per channel as well. We don't have any missing configurations or lack of coverage from a workload standpoint at all. You know, again, we are very, very excited about what both our cloud and end enterprise customers are saying about what they're seeing from a performance and energy efficiency standpoint, and obviously, the overall TCO equation that we're very, very focused on.
Got it. How much of the growth in the back half, Dan, do you think is more units versus ASPs? You know, there is a sense that there's obviously a lot more value that these new products, whether it's Genoa or Bergamo, are providing. Is the back half, that 50% growth, predicated a lot more on ASPs rather than units, or how is that balanced?
Yeah, this is a really good question, because with every new generation for us, we're bringing more value to the customer. It could be in the form of more cores, more features, like I just talked about in terms of DDR5 and CXL memory expansion, and our goal is to cover more workloads. Our main goal is to deliver more value to the customer in terms of TCO. The TCO equation you can think of as performance and features per dollar. Every generation we build, it's a close look at: What are we bringing to the customer? What's the value to the customer, such that they deploy as quickly as possible when we have the products available?
Right.
We believe we hit that very, very well with Genoa. We believe we're gonna hit that with Bergamo and the entire Zen 4 family. When you go from Milan to Genoa to Bergamo, you're going from 64 cores to 96 cores to 128 cores. Naturally, there's going to be an ASP uplift. We will be seeing a good ASP uplift. However, we are very focused on the value that is coming to the customer. Even with that ASP uplift, our customers are getting dramatic value, and that's the feedback they're giving us: customers across both the cloud and the top verticals in the end enterprise, in the Fortune 500 class, are really talking about the TCO value that they're seeing.
For us, ASPs will increase, and of course, there's a unit growth expectation also.
Got it. Your main competitor, right, is launching their Sapphire Rapids product. I think they've said that they already shipped 1 million of those through their Q1. Do you think this head-to-head competition in this generation is, you know, different, better, worse versus the prior generation? Like, are they getting any closer to slowing down your very torrid pace of share gains in the data center?
We've run, you know, a lot of cycles and performance modeling on Genoa, as have our customers. I can tell you this: from what we see today and from what our customers are saying, there is clear performance, energy efficiency, and TCO leadership today. Very strong leadership, and we continue to see that in the feedback from the customers. From an overall standpoint, in terms of performance, we believe we are extremely well positioned with Genoa, and we will be with Bergamo. The other point I already raised: Milan holds up extremely well to Sapphire. Across many, many workloads, it's higher performance, and it obviously comes at a better price because it's PCI Express Gen 4 and DDR4.
I call it this one-two value punch: for the extreme performance workloads, customers will use Genoa, but for some other workloads, like storage and maybe some virtualization environments, they can use Milan, get every bit of the performance, better than Sapphire, and get a better price advantage. We believe that we are extremely well positioned and, you know, poised for more share growth.
Got it. Do you think, if your competitor is able to achieve process parity, they can leap ahead in any way? I know it's a little premature to talk about products that are out in 2024, 2025; we have not seen any performance yet, right? When you look at the roadmap of the Emerald Rapids or Granite Rapids platforms, and their contention that they will be able to achieve process leadership, or at least parity, do you think that changes the way you look at your roadmap? Or do you think that the competitive environment could be different going forward?
Well, look, we always look at what our competition is doing, but for us, it is a maniacal focus on what we need to go execute. There are really three reasons why we believe we will continue on our execution cadence. First and foremost, our partnership with TSMC. Mark Papermaster, who runs Technology and Engineering for us, always talks about the relationship in terms of a co-optimization mode.
Right.
We are out with Zen 4, which is based on 5nm. Zen 5 will be based on 4nm and 3nm, and we continuously work with TSMC engineers to get as much as possible out of that process. We've done that obviously across 5nm, and across 4nm and 3nm based on what we have back for next gen. The other piece is design innovation. Our team is very, very focused on architecture and design innovation to continuously push the process. Whatever we get out of the process, we push harder on.
Right.
On architecture. Lastly, we are a leader in chiplet integration. We basically started chiplet integration with Naples, and our advanced packaging around 2.5D and 3D stacking, with Genoa-X, will keep us continuously advancing. Moore's Law certainly has slowed, but we think those three things, co-optimization, design innovation, and advanced packaging leadership, will keep us ahead and keep us on the path that, you know, our roadmap dictates.
Right. Is there, Dan, any natural limit to how much share AMD can have in cloud servers? You know, for example, when we look at it on a blended basis, we think it's over 30%, but in certain cases it could be 50% plus. Is there any natural limit to how much AMD's share can be? Or do you think that once it gets toward that 50% range, it starts to mature, or there is no such limit?
It's an interesting question. We don't typically talk publicly about our share goals or anything like that. For me, when I look at defining a product roadmap and then executing it, we are driving for maximum share.
Right.
We certainly believe that if we continue to deliver the value and the TCO to the customer, we can continue to gain share. In cloud, we are very well entrenched. In enterprise, we are very focused on continuously driving more share gain because, as Lisa has mentioned many times, we're underrepresented there, and things are going extremely well there. There's some upside there for us. We're not giving out any new share gain targets, but we believe that our roadmap is set up such that we'll continue share gains.
Got it. What is AMD's AI strategy? I ask that because, you know, the external perception is that, look, there is this really large and focused GPU player; it's general purpose, and they have the scale advantages and general-purpose advantages. Then you have ASIC solutions from the Broadcoms and Marvells of the world. Given that the market seems to be bifurcated into those two extremes right now, what is AMD's lane to really differentiate? Essentially, how do you define your AI strategy?
Sure. Next week, I hope everyone will tune in to our data center event, because there'll be...
I was hoping for a preview of that.
You know, we will go through a bit more detail on the AI strategy, but let me just talk at a high level. We've been very focused on AI for quite some time, right? We have the Instinct family that's out there; the MI250 is in the market today, in supercomputing and doing a lot of large language model training on the LUMI supercomputer. We are developing the MI300, which, when you look at it, is CPU, GPU, and integrated HBM, delivering unmatched memory bandwidth and memory capacity. We're very excited about that portfolio and the pipeline we have, and we'll see that come online and ramp in Q4 across both supercomputing and AI.
Overall, when you look at it, we just combined a number of different groups within the company under Victor Peng's leadership to drive a hardware, software, and library optimization model across all of our products. When you think about how we look at AI, it's really client to the edge to the cloud: integrated AI capability with overarching software stacks, ROCm for training and then our UIF for inference. We're very excited about the progress we're making. We're very excited about bringing the MI300 to market. We also are leveraging the Xilinx AI Engine; think about it as a scalable hard IP that we've integrated into the Ryzen client device and the Alveo V70 inference accelerator card. Again, this is filling out the portfolio from client through the edge to the cloud.
Again, we're very excited about the progress. We're very excited about the customer feedback and the engagement on the software side also. It's not just silicon, as we know. Under Victor's leadership, you'll hear more about this next week. As everyone knows, the pipeline and the opportunity here have grown pretty significantly over the last six months, and we feel like we're in a pretty good position.
Got it. Conceptually, do you think, whether it is your MI300 or Grace Hopper or other solutions, that these converged CPU-GPU platforms will gain a big share of the market? Or do you think that the market will stay majority discrete, whether it's your CPU or your competitor's CPU, paired with a discrete accelerator solution, whether it's yours or somebody else's?
Yeah, it's a good question. When we looked at the MI300, and this is the way we go to market all the time, we sat with our customers and said, "What problems are we trying to solve at the system level for both supercomputing and AI?" That's how we came up with, you know, the integrated Zen 4 cores with the Instinct CDNA cores, and then the HBM memory. We believe that's gonna be a very good solution, like I said, for both supercomputing and AI. We also know that Genoa can deliver, and will continue to deliver, very strong host node support for training and smaller model inference, maybe smaller recommendation engines, image recognition, and things like that. Our position is, we bring maximum flexibility to our customer.
If they want to leverage the MI300, we're very excited about that, and we can provide that solution. Then, if they're going to do a separate accelerator, we can win that also with the Genoa product.
All right. How big of an obstacle is software? Because the incumbent will say, "Look, I've been investing in software on GPUs and accelerators for 10 or 15 years." How do you suddenly catch up to that kind of a lead in the industry?
Yeah, it's definitely very, very important, and that's why we made the organizational change, and we are making a lot of very rapid progress. We're targeting and engaging with the top hyperscalers that have software capability too. You know, we're very, very focused on winning in some of these large language models for both inference and training. We have a torrid pace with this group, and the group has a lot of the Xilinx folks in it, and we're pretty excited about the progress right now. It will take some time, but we are definitely making a lot of progress, especially where we're focused, which is the top hyperscalers.
Got it. One last one: what's the importance of bundling the compute and the switching side? Again, because we saw AMD succeed extremely well on the CPU side, right? Just amazing execution there. In the AI market, do you think having that combination of both the compute and the switching, whether it's InfiniBand or Ethernet, becomes part of the solution? Or do you think customers are still gonna buy best of breed, so they are perfectly capable of mixing and matching somebody else's switch with your compute?
Yeah, Vivek, I think it'll be a mix, right? That's partly why we bought Pensando a year ago. In fact, we just passed the one-year anniversary of the Pensando acquisition.
Right.
We're very excited because they bring DPU technology, and DPU technology is critical to exactly what you just talked about: the acceleration of the network, security, or storage. They are a tremendous team, because I think the industry has learned over the last five to seven years that it's one thing to build a really good chip, and they've done that, but the software architecture and the system solution are super critical, and this is where this team excels. They have a full stack that can be either customized for any particular customer or offered off the shelf. That is part of the broader view we have across the data center, where, as I mentioned, we want to be the most strategic supplier.
We have CPUs, we have GPUs, we have DPUs, and we have adaptive SoCs from Xilinx. We believe we can build these solutions with customers, but I do believe it'll be a mix in terms of how customers decide to solve it.
Got it. How do you allocate resources, then? There are advantages and disadvantages of having just one solution, and I imagine there are pros and cons of having multiple solutions. How do you allocate resources when you have so many different options available? Or is it very workload or customer dependent?
It's exactly that. It's very workload and customer dependent in terms of where we invest. You know, at this point, our focus, as I just talked about, is delivering the server CPU through Zen 4 and, of course, next year with Zen 5, and then, with the MI300, getting that to market with the top hyperscalers. The DPU business is growing; they're on their second-generation DPU, and that's across multiple hyperscalers today. We show up at our customer with a complete solution, allow them to mix and match, and then optimize based on their demand.
Right. You know, one other recent point of industry discussion has been that overall budgets are not growing, right? AI and accelerators are taking so much more of the budget. How do you see that aspect impacting AMD? If you look at your data center business, are there parts that could be left behind because they have to be sacrificed to make room for the AI part of the budget?
Well, there's no question that there is an explosion of AI interest across virtually every enterprise. Every enterprise is trying to figure out how to leverage AI to become more efficient, no question about it. For us, from a CPU standpoint, CPUs still have a very big play in this AI conversation, right? Like I talked about, there's a host node opportunity for training, there's CPU inference that will stay, and then there's the broad swath of complete applications that CPUs have been serving for 30 years that are not going away. We do believe that there'll be a good complement of CPUs continuing forward, and then, with the MI300, we'll provide a lot of value there also.
Got it. On process node: AMD has managed to keep a very strong cadence, right? Every two years, I believe, you have managed to do that. Should we expect that to continue? I imagine you're already looking at 3nm and things beyond that. The reason I ask is because, generally, I think the consumer industry has driven new manufacturing nodes. As the consumer industry slows down, does that impact the cost, the availability, the reliability of more advanced nodes? Like, does the slowdown in the consumer industry impact your ability to take advantage of leading-edge nodes?
I don't think it does. You know, our roadmaps are driving to smaller geometries, and we will continue, as I mentioned earlier, to partner with TSMC on this co-optimization piece. We have a clear roadmap, and we're executing to it, and I don't believe there's any stopping going forward here in terms of the next node. Moore's Law is slowing, so you do need to have those other two parts to it, right? Advanced packaging, and design innovation, constantly looking at the architecture to optimize for the process node. We do not see our execution cadence slowing.
Got it. The ASP improvement in your products, is there any natural limit to that? Or how do customers look at that ASP increase, whether it's 15%, 20%, whatever it has been generation on generation? Is there a limit to how much of a CPU ASP increase customers will be willing to bear? Again, because if more of the workload is going to the accelerator, does it not mean that the value the CPU is adding comes down over time?
Yeah, I'm gonna go back to what I mentioned earlier. It's all about the performance per dollar on a gen-to-gen basis.
Mm-hmm.
To perform a certain task, what is the customer getting in terms of performance and features per dollar spent? We spend a great deal of time on that across workloads. This is not just a broad spec-in conversation. This is a, how do we provide the most value for that next generation? At the end of the day, when we put out a new product, we want to enable our customers to adopt it as quickly as possible to get that TCO advantage. That's really what we're focused on, and it is workload dependent and actually segment dependent too.
Right.
You know, we believe we're gonna continue to deliver that TCO for the customer on a gen-to-gen basis.
Got it. When it comes to the acquisition of Xilinx, which was completed, I think, a little over a year ago, what have the synergies been so far? Because, you know, AMD now has a very strong footprint in the cloud and is improving in enterprise. What about the telco and embedded areas? How is that part of your portfolio?
Telco and embedded are great areas for Xilinx, and my history dates back to this business.
Right.
The embedded business for us, as you've seen, has been very, very strong, and we're very excited about it. The cross-selling opportunities have been great. The former Xilinx relationships are very strong across automotive, telco, and some edge applications. They've brought not only EPYC-type opportunities, but client opportunities with Ryzen also. There's been very good cross-selling and integration of the sales teams across all of these broader segments. If you think about the embedded business, it's a number of different verticals: it's automotive, it's industrial, it's imaging, it's video, a number of separate verticals. Their relationships are very strong because, at the end of the day, they're helping customers actually design chips. That relationship is very tight, and the cross-selling has gone extremely well.
Conversely, our engagements with some of the top hyperscalers have helped them also. I think it's gone as well as we could have expected in terms of the cross-selling and the synergies between the different groups within Xilinx and within AMD.
Got it. On enterprise, I know we spoke a lot about the cloud. How is AMD's position in the enterprise? You know, that's historically been an area where AMD has been underrepresented.
Yep.
What are you doing to... Is it a case of the CIOs making the decision, or is it that you have to make the case to the OEMs, so they can make the case for you? How does that go-to-market differ versus cloud?
It's a great question, Vivek. I'm very excited about our enterprise traction. I mentioned earlier that the inflection point was Milan. We continued that with Genoa, delivering, again, increased performance, a better TCO, and energy efficiency. The testing that we're seeing and the adoption are going very well. We've added a lot more feet on the street, and even within my own business unit, we've added a lot of different skills for the top verticals across the enterprise and the Fortune 500. We feel very good. We're expanding our portfolio of platforms.
All of the top OEM players have Genoa-based products in production. We're also projecting very large growth in the number of platforms that we'll have, so we'll be able to address more of the actual end enterprise market as we go forward with Genoa. Couple that with Genoa-X, which is very focused on a different part of the enterprise market.
Right.
We feel very good about the products we're bringing to market, and we are getting to the end customer. Your question originally was, is it one or the other? It's really both. We work hand in hand with the OEMs and the entire channel, and then we also go to the end customer in addition.
Makes sense. Terrific. Thank you so much, Dan. Pleasure to host you. Really appreciate your time.
Thank you, Vivek.
Thanks, everyone.