Greetings, and welcome to the AMD announcement of the signing of a definitive agreement to acquire ZT Systems conference call. At this time, all participants are in a listen-only mode. A question and answer session will follow the formal presentation. If anyone should require operator assistance during the conference, please press star zero on your telephone keypad. As a reminder, this conference is being recorded. I would now like to turn the conference over to your host, Mr. Mitch Haws, Investor Relations for AMD. Thank you. You may begin.
Thank you, and welcome to today's conference call about AMD's announcement regarding our proposed acquisition of ZT Systems. By now, you should have had the opportunity to review a copy of the press release and Form 8-K announcing the transaction. If you have not had the chance to review the release or the Form 8-K, they can be found on the Investor Relations page of amd.com. In addition, following this call, we will post the slide presentation on amd.com. Participants on today's call are Dr. Lisa Su, our Chair and Chief Executive Officer, and Jean Hu, our Executive Vice President, Chief Financial Officer, and Treasurer. This is a live call and will be replayed via webcast on our website.
Today's discussion contains forward-looking statements based on current beliefs, assumptions, and expectations that speak only as of today, and as such, involve risks and uncertainties that could cause actual results to differ materially from our current expectations. Please refer to the cautionary statement in our press release regarding today's announcement and our reports filed with the SEC for more information on factors that could cause actual results to differ materially from these forward-looking statements. As a reminder, we will focus the discussion today exclusively on the proposed acquisition. With that, I will hand the call over to Lisa.
Thank you, Mitch, and good morning, everyone. I'm excited to announce today that we have signed a definitive agreement to acquire ZT Systems, a leading provider of AI infrastructure to the largest hyperscale customers in the world. This is the next major step in our long-term strategy to deliver leadership AI solutions. The ZT team complements our silicon and software capabilities with critical systems expertise needed to deliver full AI solutions, from silicon to software to rack and cluster-level solutions. With ZT, our largest hyperscale customers will be able to more rapidly deploy optimized AMD AI infrastructure, while our OEM and ODM partners will have access to rack-level designs that will speed their time to market. Just to set some context, we have invested significantly over the last ten-plus years to deliver our multi-generation product and technology roadmaps.
That starts with our Zen, CDNA, RDNA, and XDNA architectures that power our leadership CPUs, GPUs, FPGAs, and adaptive SoCs. We have been an early adopter of leading-edge manufacturing nodes, and we have led the industry with our chiplet designs using the most advanced 2.5D and 3D chiplet packaging. We have invested in the key technologies needed across the data center space while strengthening our software capabilities and making it easier for customers to deploy and unlock the full potential of our hardware. As a result, our portfolio of high-performance and adaptive computing products spans cloud, HPC, enterprise, embedded, and PC markets. The next major arc for AMD is AI. AI is the most transformational technology of the last 50 years and our number one strategic priority. We believe AI has the potential to drive unprecedented growth over the coming years.
Just in the data center, we see 2024 as the start of a multi-year AI adoption cycle, with the accelerator market expected to grow to $400 billion in 2027. We launched our MI300 accelerator family in December, and customer response has been overwhelmingly positive. In addition to large-scale deployments with Microsoft, Meta, and Oracle, as well as other cloud customers, all major OEMs and the leading server manufacturers offer MI300 solutions. As a result, MI300 has become the fastest-ramping product in our history, and we're on track to deliver more than $4.5 billion of data center GPU revenue in 2024. Now, as important as the hardware is, we know that software is critical to enabling widespread adoption.
We've made significant investments in our software capabilities and ecosystem in recent years, delivering powerful new features and capabilities in our ROCm software stack. We have also built out a broad ecosystem of partnerships with the open source community, including support for AMD hardware in many of the most widely used AI frameworks, libraries, and models, including PyTorch, JAX, TensorFlow, ONNX, vLLM, Triton, and Hugging Face. Now, it's clear that maintaining this incredible pace of model and software innovation will require a more expansive view of hardware innovation. We recently announced a new roadmap, which delivers an annual cadence of new AI accelerators. We are very excited about our next generation MI325X, MI350 series, and MI400 series products. But to make AMD the AI infrastructure leader and meet the compute demands of next-generation frontier models, we must offer more than leadership components.
We must have the capabilities and expertise to optimize solutions at the systems, rack, and even the data center level. That is why today we are announcing our agreement to acquire ZT Systems, the industry's leading provider of AI and general-purpose compute infrastructure for the world's largest hyperscale companies. Founded 30 years ago, ZT has focused for the past 15 years solely on providing compute infrastructure solutions to hyperscale data centers, giving them unique insights into the needs of the cloud. ZT has more than 1,000 world-class design engineers with deep expertise in motherboard, power, thermal, networking, and rack design, expertise that enables them to partner closely with large cloud providers to design highly optimized system and cluster-level solutions. They understand the challenges of designing and managing high-performance, high-density systems at massive scale.
Over seven years ago, ZT began developing optimized AI systems and cluster-level solutions for the leaders in the cloud to power the AI revolution. During that time, ZT and AMD have closely collaborated on both AI and general compute platforms for large cloud customers, powered by EPYC CPUs and Instinct MI250X and MI300X accelerators. As the demand for generative AI solutions accelerated several years ago, ZT scaled quickly to become one of the leading providers of AI infrastructure systems. With leadership U.S. and European manufacturing capabilities and hundreds of service professionals, ZT ships hundreds of thousands of servers and tens of thousands of AI racks per year to the largest hyperscale cloud companies with industry-leading quality. ZT's deep expertise across system design, validation, networking, and test significantly strengthens AMD's AI systems and customer enablement capabilities.
ZT engineers will provide us with crucial insights that will enable us to fully align our silicon and software roadmaps with the evolving needs of cluster scale systems and hyperscale data center operations. Systems and cluster designs will also be brought up fully in parallel with silicon design, greatly accelerating the time to validate and deploy solutions at data center scale. The ZT service enablement engineers will also speed the deployment and provisioning of AI clusters, enabling AMD customers to bring AI solutions into production significantly faster. With the rapid pace of innovation in the AI market, our ability to reduce the end-to-end deployment time of cluster-level solutions will be a significant competitive advantage. ZT's world-class cluster-level system capabilities will enable AMD to deliver production-ready AI solutions at the board, system, rack, and data center level.
It is important to note that our strategy around open ecosystems and providing our customers choice remains the same. We are not planning to compete with our customers or drive proprietary solutions. This acquisition is about providing our customers with a stronger and more complete roadmap of AI solutions, and enabling them to innovate on top of our platform. AMD will continue working closely with our broad set of OEM and ODM partners to deliver optimized solutions to market with our CPU, GPU, networking, and new systems capabilities. To that end, we will be seeking a strategic partner to acquire ZT Systems' industry-leading, U.S.-based data center infrastructure manufacturing business, following the close of the transaction. We believe ZT Systems' manufacturing business will be a very attractive asset to multiple players in the ecosystem, given the scale of the business and the U.S. and European footprint.
We have invested significantly in recent years to expand and accelerate our AI hardware roadmaps, strengthen our AI software capabilities, and build out a broad ecosystem of partners who support our strategy. With the acquisition of ZT, we take another major step forward in strengthening our AI capabilities, adding to our leadership hardware and software roadmaps with best-in-class systems and rack-level capabilities. Now, I'd like to turn the call over to Jean to provide some additional color on the transaction.
Thank you, Lisa. We're excited about the opportunity to acquire ZT Systems, as it accelerates our AI strategy and aligns with our focus on allocating capital toward the highest growth opportunities. We believe this acquisition will play a key role in driving long-term growth and returns for our shareholders. Let me briefly review the details of the transaction. The purchase price of $4.9 billion is inclusive of a contingent payment of up to $400 million, which will be based on the achievement of certain post-closing milestones. The consideration consists of 75% cash and 25% AMD common stock. We plan to finance the transaction with cash available on our balance sheet and additional debt financing. Upon the close of the transaction, we will add approximately one thousand employees with extensive systems and data center service expertise, resulting in approximately $150 million of annualized operating expense.
We expect minimal dilution in the first year and plan for the transaction to be accretive on a run-rate basis exiting calendar 2025, as we expect the revenue from service support and additional GPU sales to offset the minor dilution of the transaction. As Lisa mentioned, we will be seeking a strategic partner to acquire ZT's industry-leading manufacturing business after the closing of our acquisition of ZT. We believe this is a very attractive asset, with more than $10 billion in revenue in the last 12 months and a U.S.- and European-based manufacturing footprint with room to scale. From a financial reporting standpoint, we expect to classify the ZT manufacturing business as held for sale and report the manufacturing operating results as discontinued operations. We expect the transaction to close in the first half of 2025, subject to regulatory approvals and other customary closing conditions.
With that, I'll turn the call back to Lisa.
Thanks, Jean. So let me just summarize here. With the addition of ZT, AMD will now have everything required to offer highly optimized systems to the largest cloud companies and provide ready-to-deploy designs that can be rapidly adopted by our OEM and ODM partners. These solutions will be built around our leadership CPU, GPU, and networking portfolios, and enabled through a differentiated strategy that combines open source software, industry standard networking technologies, and now ZT's leadership systems capabilities. This is an incredibly exciting time for the industry, as the widespread deployment of AI is driving demand for significantly more compute across a broad range of markets, and it's an even more exciting time for AMD as we take this next major step in our AI journey. The acquisition of ZT will enable us to bring even more innovation to the market together with our ecosystem of partners.
Now let me turn the call back to Mitch for the Q&A.
Thank you, Lisa. Operator, we're happy to poll the audience for questions.
Thank you. If you'd like to ask a question, please press star one on your telephone keypad. A confirmation tone will indicate your line is in the question queue. You may press star two if you'd like to remove your question from the queue. For participants using speaker equipment, it may be necessary to pick up your handset before pressing the star keys. To allow for as many questions as possible, we ask that you each keep to one question and one follow-up. Thank you. Our first question comes from the line of Matt Ramsay with TD Cowen. Please proceed with your question.
Yes, thank you very much. Good morning, everybody. Lisa, Jean, congrats on the transaction here. Lisa, I just wanted to ask: it seems to me like the market's initial reaction is maybe that you're gonna go and do full rack-scale architecture and sort of design and build servers and sell them yourselves. But in some of your comments, you sort of indicated that this is about doing customized server-level architecture and then allowing your OEM and ODM partners to still work with you in a traditional way, but to bring them to market faster with customized solutions.
So maybe you could spend a little bit of time and draw that distinction out a little bit further for us of exactly where the lines are drawn between what you now intend to potentially do internally versus the relationships you've had with your OEM and ODM partners traditionally. And then I have a follow-up. Thanks.
Yeah, absolutely, Matt. Thank you for the question. Look, as we said, we're in a place where these AI systems are getting more and more complex, and the key is, how do we get these leadership capabilities into market as fast as possible? And that's what we're doing with ZT Systems. We're gonna add a very talented group of design engineers who know everything about deploying rack-scale infrastructure in cloud environments. It's going to help ensure that our solutions and our architecture are much more optimized on day one. We're very focused on our largest cloud customers, who will benefit from this design capability.
And for our OEM and ODM partners, we're gonna continue working with them as we always have. We'll give them a reference design, and they will innovate on top of that. I think the thing that we've learned in talking to our customers is that they have very diverse data center environments. There's not a one-size-fits-all in terms of systems, but there is a set of foundational capabilities: how do you optimize systems? How do you ensure that they get operational as fast as possible? How do you ensure their reliability is as good as possible, manageability, all of those things? And this team has been doing this for many years in the most demanding cloud environments.
And so they will significantly add to our systems capability and allow us to work more closely with our largest cloud customers, as well as our OEM and ODM partners.
Got it. Now, thank you for that. I guess as my follow-up, Jean, a little bit on the transaction: you guys mentioned in the script the large systems manufacturing business that you intend to divest post-transaction. So, if I heard you right, you're keeping $150 million, give or take, of run-rate OpEx, and this will be somewhere around break even, or close to it, at close, so $150 million, give or take, of gross profit. How much revenue run rate of where the business is now do you intend to keep? And what is the... I mean, we see one of your major competitors that sells full servers doing 80% gross margin in data center. There are some ODMs that do 15%.
I'm just trying to get a feel for where this is gonna land in the model. Thanks.
Yeah, thank you, Matt, for the question. First is, we're quite excited. As Lisa mentioned, we're adding approximately 1,000 very talented engineers. So that's about $150 million of annualized operating expense. From a revenue perspective, for the service and enablement portion of the company, the revenue is not big. So that's why, between the OpEx increase and some interest costs, there will be some very minor dilution at the beginning of the transaction. But exiting 2025, we do expect the services and support revenue, as well as the additional GPU sales, will offset the minor dilution and get us to accretion. Most importantly, this is an investment for the longer term, to drive longer-term growth in this fast-growing market.
All right. Thank you very much. I'll jump back in the queue.
Thank you. Our next question comes from the line of Aaron Rakers with Wells Fargo. Please proceed with your question.
Yeah, thanks for taking the question. Also, congrats on the transaction. So, I guess building on the last question, I'm just curious, Lisa: you mentioned again there's this emphasis around optimizing an annual cadence of your GPU roadmap with a full-stack system strategy. Given the experience you guys have had so far with the MI300, relative to what this brings to the table, is there any kind of context you could provide us with regard to how you define success? What does that timetable look like as far as time to deployment? How does that change with this acquisition?
I'm just curious if you could unpack that a little bit more, and I do have a follow-up.
Yeah, sure, Aaron. Thanks for the question. Look, we're super happy with how the MI300X has ramped. We've had very good adoption by some of the largest hyperscalers, and we continue to work with a number of large accounts, both enterprise and cloud, as well as AI companies, to adopt MI300. What we see going forward, though, when you think about this AI infrastructure over the next several years: as these model sizes get larger, as clusters get larger, whether you're talking about large training clusters or large-scale inference, these are just very complicated, data-center-class systems opportunities.
And so what we really see with this team is that it will help accelerate AMD at large scale. And think about that as we go into the MI350 series and the MI400 series: very complex systems. It's about CPUs, GPUs, networking, systems, clusters. How do you ensure that they have the right reliability? This team will help us do that, because they've done it. I mean, they've done it at scale today. And what it allows us to do, when we bring them under the AMD umbrella, is quite a bit of development in parallel, so as we're validating our silicon systems, we're also, in parallel, building the systems infrastructure, and doing that in partnership with our largest customers.
I see it as very additive to the efforts that we already have, and it will enable us, again, to help our customers deploy the technology as fast as possible. It's also important, and I just wanna go back a little bit to the question that Matt raised, and what you're sort of talking about, Aaron: this is not about taking away customers' choice, right? We're not saying everybody has to take the design that we're creating with the ZT Systems team. What we're saying is we're gonna provide a much, much stronger foundation. Some hyperscalers are gonna want different optimizations of those systems designs, and we'll have the team to do that.
The OEMs and ODMs will also innovate on top of that with their own systems capabilities, and it just gives the AMD ecosystem a big turbocharge in terms of design capabilities around our products.
Yep, that's very helpful. And then maybe just building on that a little bit, can you talk about how networking plays a role in this, and whether or not, you know, this acquisition provides you with a platform to delve deeper into networking as we look at, you know, the optimization that your big competitors are doing, you know, with their internal networking capabilities? Just, you know, is there anything to read into here, as an accelerator to that part of the strategy?
Yeah, absolutely, Aaron. Look, we think networking, along with the rest of the compute technologies and cluster-level technologies, is really important. We've continued to invest in networking, both internally as well as with our partners. And yes, I think the acquisition and the addition of the ZT Systems engineers, who have spent a lot of time on how you connect these large-scale AI clusters, is gonna be very helpful. And on the networking side, we have our Pensando team, who are very involved in some of the optimizations of our next-generation AI systems.
And then we're working closely across our consortiums, with the Ultra Ethernet Consortium as well as the UALink group, to also ensure that we have very strong networking technologies that are industry standard.
Yep. Thank you.
Thanks, Aaron.
Thank you. Our next question comes from the line of Harsh Kumar with Piper Sandler. Please proceed with your question.
Yeah. Hey, Lisa. Thank you. Congratulations on the deal. I had a question about the rationale as well. This sounds like it's more about defining rack systems designed to AMD standards. Historically, AMD's been all about the open market and open standards. So was the market-based open standards approach for rack-level design taking too long? Was it spread out? Was it dysfunctional enough that you decided to bring this design approach in-house and sort of show people how it needs to be done, at least at the base standard level? And then for my second one, I'll just ask it upfront: Have you found a partner for the sale of the manufacturing piece? Thanks, Lisa.
Sure. Yeah, thank you, Harsh. So, let me answer both questions. To your first question, we are absolutely about open standards and an open ecosystem, so that really doesn't change. I think what this acquisition is about is that these large-scale clusters are getting more and more complex, and for AMD, we can actually help the ecosystem get deployed at scale faster. That's really what this is about. We will still see a lot of innovation at the individual system level. But what we have now is a very, very capable set of design engineers who can actually help our customers get these solutions to market. So think of it as very additive, Harsh.
Not replacing anything that's out there in the open ecosystem, but really adding to it, especially around our technologies and the adoption of our technologies. And then to your second question, about the manufacturing partner: look, the manufacturing assets here are actually very attractive. As we mentioned, the business is primarily based in the U.S., with some European capabilities as well. We think there will be multiple partners who would be interested in the manufacturing business, and we will take care of that as we get through the close of the transaction.
Thank you, Lisa.
Thanks, Harsh.
Thank you. Our next question comes from the line of Vivek Arya with Bank of America Securities. Please proceed with your question.
Thanks for taking my question. Lisa, I'm still unclear what the 2025 and 2026 sales and gross margin contribution is for ZT, so I was hoping for more specificity. I mean, you're mentioning it as accretive, but at what level of sales and gross margins, and how is that worth $4.9 billion or whatever is the net of proceeds? So I was just really hoping for some more specificity, because without that, it's very hard to know what the return on investment is here.
Yeah, absolutely. Let me start, Vivek, and then maybe Jean can add to it. So the way to think about it is, ZT has a full at-scale manufacturing business. We mentioned earlier that if you look at their revenue over the last 12 months, it was north of $10 billion. That is mostly manufacturing revenue, and that will be separated as part of the manufacturing business post-close. The piece of the revenue and financials that will come into the AMD P&L, and what we were talking about as it relates to being accretive by the end of 2025, is the design enablement and services portion of that business. It's not huge.
It's really the tip of the spear that will come into our P&L. Those margins are typically quite reasonable. We'll have the OpEx of about $150 million or so for the 1,000 engineers. The way we see accretion in our P&L is, obviously we'll keep that services and support revenue, but more importantly, we'll accelerate the overall sales of our AI capabilities: GPUs and CPUs, as well as the overall acceleration of our roadmap, selling the MI350 series and MI400 series as we go into 2026. So that's the way to think about the P&L. Maybe, Jean, if you wanna add?
Yeah, yeah. I'll just add one thing, which is that we will be seeking a strategic partner to acquire this manufacturing business. And, of course, when you think about the investment, it will be the net purchase price. And of course, we'll update you when we close this deal and make more progress on the sale of the manufacturing asset.
Got it. Thank you. For my follow-up, I'm curious how much of the ZT business that you intend to keep relies on their partnership or interaction with NVIDIA or Broadcom or Intel or some of your competitors, and how much of that business do you expect to retain once that deal is closed?
Yeah. So ZT has a broad set of partners today, and they're gonna continue business as usual. They are also a significant AMD partner in terms of the AMD systems business, and we would expect that to continue to scale as we go forward. So I think no change to their current business.
Yeah, I'll just add, right, the services and support revenue we talked about, those are with the largest hyperscale cloud customers. So we do intend to continue to support those largest hyperscale cloud customers.
Understood. Thank you.
Thank you. Our next question comes from the line of Ross Seymore with Deutsche Bank. Please proceed with your question.
Hi. Congratulations on the deal. Lisa, I just wanted to dive a little bit into the acceleration side of things, as far as your ability to accelerate the growth. You've moved to that one-year cadence. How does the ZT acquisition accelerate that? Or, you know, how much faster can you penetrate the $400 billion? You guys sound like this is kind of a revenue synergy deal, more so than anything on the margin side. So I wanted to get a little bit more color on the revenue synergies and put some numbers around it.
Yeah, Ross, I think you're understanding it well. The way to think about it is, the entire cycle, from silicon development to software optimization to systems build-out to full at-scale production, takes a while. I mean, it's one of those deployments that is actually pretty complex. I think what this acquisition does is it allows us to really bring some of that in-house and help our customers deploy faster. It's very significant when you think about an annual cadence of new accelerators or new GPUs coming out. We wanna make sure that our customers are able to deploy and use all that technology as fast as possible.
So we do think it's a significant acceleration of what we would normally see, and that will play out as we go into the next couple of years. It's one of those things where, as we do more, we also see the systems knowledge feeding back into our silicon optimization. And so it allows us to go faster, but it also allows us to build better solutions going forward.
Thanks, Lisa. I guess as my one follow-up for Jean, just to go back to Vivek's question and maybe try to put a little finer point on it. I know you can't really say the price for which you're going to sell the manufacturing business, but if that's the majority of revenues, I guess all we know right now is you said $4.9 billion is the price. We don't know what the net is gonna be, and you said over $10 billion in revenue, but we don't know how much of that you're actually gonna keep. So it sounds like we know or you believe you're gonna have greater than $150 million in gross profit at the exit of next year on an annualized rate.
Can you give us any idea as to what the revenue run rate would be for the remaining asset, the gross profit, or gross margin that you would target over time? Any more color on those points?
Yeah, thanks for the question. So when we said more than $10 billion in revenue, that is all manufacturing revenue, just to be clear about that. It will be classified as discontinued operations from an operating results perspective. On the other side, though, for the business that we're going to keep, the way to think about the services and support model is that the top-line revenue, as Lisa mentioned, is not big, and the operating expense we're going to add is about $150 million on an annualized run rate. The gross margin is not too different from AMD's corporate gross margin. So overall, in the first year after we close in the first half of 2025, the financial impact is minimal, right?
From a top-line, gross margin, and bottom-line perspective, the dilution is very, very minor. And once we accelerate our time to market exiting 2025, you should expect us to continue to sell more GPUs and more than offset the dilution from this transaction.
Thank you.
Thank you. Our next question comes from the line of Stacy Rasgon with Bernstein Research. Please proceed with your question.
Hi, guys. Thanks for taking my questions. I mean, I don't wanna harp on this. I guess maybe I do wanna keep harping on it, sorry. Is it fair to say the current revenue you're keeping effectively rounds to zero? And then on the accretion, it does... So basically, all of the accretion effectively comes from just selling more GPUs. That's where this comes in. So, like, what do you expect the accretion to be exiting 2025? Is it just touching break even? Do you expect significant accretion? I'm just trying to get some feeling, which I think everybody else on the call is, for how much of your GPU sales you actually feel like this accelerates, above and beyond what you currently think you would do without this, I guess, into 2026 and beyond?
Hi, Stacy. Thank you. Since we expect to close the transaction in the first half of 2025, the way to think about it is that this is an investment for the long term. Exiting calendar year 2025, you're right, it's about just break even, but longer term, in 2026, we do expect the acceleration of time to market will accelerate our top-line revenue growth.
Got it. And I guess for my follow-up, you know, you've been buying bits and pieces over the last year or so to build out the ecosystem, with Pensando and others. Do you need anything else to complete the portfolio relative to what your broader competitor has already built? Like, what else should we be looking for, or have you now got the complete portfolio you think you need to scale this?
Yeah, Stacy, let me take that. You know, look, we have been investing significantly, both organically and inorganically. So if you look at our investment in capital allocation, clearly it's around AI: the AI hardware roadmaps, the AI software roadmaps, and then, in this case, we add a tremendous systems capability. So I do feel like we have a very complete portfolio in terms of what future AI systems look like. And look, we'll continue to look at how we can aggressively add to our capabilities, and that's on both an organic and inorganic basis.
Like, what kind of capabilities?
It's around ensuring that our customers can use our silicon software and solutions as quickly and easily as possible.
Got it. Okay. Thank you, guys. Appreciate it.
Thank you. Our next question comes from the line of Toshiya Hari with Goldman Sachs. Please proceed with your question.
Hi, good morning. Thank you so much for taking the question. I had a high-level question regarding competition.
Lisa, how does this acquisition position you relative to the nearest competitor? Do you see yourselves as being sort of in line with NVIDIA at this point from a capability standpoint? Do you believe you're fundamentally better positioned? And if you believe you're fundamentally better positioned, if you can walk us through how that is, that would be really helpful. And then I have a follow-up.
Sure, Toshiya, thanks for the question. First, if I take a step back and give you some context: we have been investing heavily in the hardware and silicon roadmaps. If you look at our roadmap, the annual cadence, we are leaders in, you know, sort of the chiplet technologies and all of the advanced packaging that is needed to bring that forward. We've invested heavily on the software side, and I think there we've gotten significantly better with our ROCm capability, especially for the largest customers.
We've now seen very significant performance improvements, to the point where, in some cases, we are above the competitor. And now on the systems side, ZT Systems brings a world-class systems capability into AMD. If you think about what we're trying to do here, we're trying to give our customers choice while giving them best-in-class design capability with our technology.
And what that means is, instead of saying, "Hey, customers have to adopt," let's call it, a proprietary systems design, we can actually use our systems capability to let customers use whatever they believe is the best capability for their workload and their data center environment. So we actually think it's gonna be quite differentiating. It's going to help the adoption of AMD solutions at a much faster pace, and it's also gonna give our OEMs and ODMs a more capable starting point for their differentiation as well. So I'm very excited about what this brings to the overall solutions capability of our products.
That's helpful. Thank you. And then as my follow-up, the $400 million in contingent payments, you speak to certain post-closing milestones. What are those post-closing milestones? Can you expand on that? Thank you.
Yeah, yeah. Thank you. I think at a high level, the way to think about it is, it's contingent on some performance milestones. It's actually related to the manufacturing business. I'll just leave it like that.
Yeah. Maybe let me just add one more point, Tosh, because I haven't mentioned it. We are extremely pleased and happy to have both Frank Zhang, the founder and CEO of ZT, who will be joining AMD to run the manufacturing business, and Doug, the president of ZT, who will be running design and systems enablement for all of AMD. So I think we have a very strong team that will be executing on both sides of the business.
Thank you.
Thank you. Our next question comes from the line of Harlan Sur with JP Morgan. Please proceed with your question.
Yeah, good morning. Thanks for taking my questions, and congratulations on the acquisition. So maybe you can just help us understand the customer benefit with ZT Systems relative to what it is today. When you started sampling MI300X last year, how long did it take for your hyperscale, cloud, and enterprise OEM customers working with AMD to do their server and rack-level designs? Was that three months? Was that six months? Was that nine months? And do you believe that with ZT Systems' enablement team, you will, by and large, eliminate much of this server and rack-level design bring-up and reliability cycle time and drive a faster time to market for customers? Is that sort of the idea here?
Yeah, Harlan, actually, that's a very good question. If you think about from, you know, start of silicon sampling to when production clusters come up for production workloads, it's typically, you know, several quarters. So, you know-
Mm-hmm.
It's not three months, and what we will be able to do is really parallelize some of that and allow us to get from silicon sample to production workloads much faster.
Great. And then as my follow-up, and I know this question, I think, was asked earlier, but, you know, as your enablement team continues to develop these sort of rack-scale architectures, where you have total control of the system's design, you can start to exploit things like specialized connectivity, networking, right? Like, so for example, using your Infinity Fabric for GPU to GPU connectivity, not just within server, but across the entire rack, maybe using a specialized switch, copper, optical connectivity, right? Does ZT Systems have this expertise on the networking side, or is this one of the potential synergies you hope to unlock with your team and ZT Systems' enablement team?
Yeah. Look, ZT Systems' team does have a lot of those capabilities, Harlan, and I think you said it well. It does allow us to unlock these, let's call it, specialized, you know, designs, as, you know-
Mm-hmm.
Different cloud manufacturers have different requirements in terms of what they're trying to accomplish: the size of the clusters, the type of networking they wanna use, how they're optimizing within their data center environment, what the thermal environment looks like. And so for all of those things, we will now be able to work very closely with the end customer and optimize silicon and software as well as the rack-level system.
Thank you, Lisa.
Thanks, Harlan.
Thank you. Our final question this morning comes from the line of Ben Reitzes with Melius Research. Please proceed with your question.
Hey, thanks, guys, for squeezing me in. My question is with regard to a couple of things. I'll just ask, there's a lot of background noise here in the airport, so I'll ask both at once. Do you have any guidance as to what you think you might get for the manufacturing business? You obviously think there's a break-even effect from the deal, and I'm wondering if that is net of the sale price, and if there's any direction on what the offset will be. And then, also, Lisa, if you don't mind, can you talk about how this can work with Silo AI, your other acquisition? Sorry if I missed that earlier, but can these two organizations work together, and how will that be set up? Thanks.
Hi, Ben. I'll answer your manufacturing sale question first. It's still very early. It's a very attractive asset. We'll give you more color once we close this transaction first. As far as accretion, it has nothing to do with the sale of the manufacturing business. It's based on our additional revenue from services support and GPU sales versus, you know, the operating results of the team we are getting. I'll let Lisa answer the other one.
Yeah, Ben, thanks for the question. First of all, we were happy to complete the Silo AI acquisition last week. The way these things work is that, for our customers, it's about silicon, software, and system solutions; they need all three pieces. The Silo acquisition brought on board, you know, 300 very talented AI software folks. They're gonna work directly on customer code and help customers optimize their code on our hardware, and that will happen immediately and across our portfolio.
And with ZT Systems, we're working on the other side of it, the systems and cluster-level architecture, which is also very important for our largest customers. So think about it as, what we're trying to do is make it as easy as possible and really enable our customers to deploy quickly. And together with our silicon capability, our software capability, and now our system solutions capability, I think we have a really best-in-class capability to help customers use AMD technology.
Thank you. Ladies and gentlemen, that concludes our question and answer session. I'll turn the floor back to Mr. Haws for any final comments.
That concludes today's call. Thanks to all of you for joining us today.
Thank you. This concludes today's conference call. You may disconnect your lines at this time. Thank you for your participation.