Okay, great. We'd like to get started. Good morning, everyone. My name is Toshiya Hari. I cover the U.S. Semiconductor and Semiconductor Cap Equipment space. Very honored, very excited to have Sumit Sadana from Micron. He's the Chief Business Officer for the company. Sumit, thank you so much for supporting the conference.
Thank you for having me, Toshiya.
Great to have you here. I have a list of questions, per usual, but we'd like to keep this interactive. So to the extent you do have questions, please have them ready. I'll go to you in 20 minutes or so. So Sumit, in terms of the near term, it's been two and a half months since you shared guidance for the May quarter. The industry is clearly in a positive stage of the cycle, both on the DRAM side and the NAND side. You know, how has the quarter progressed relative to your expectations and any standouts that you would highlight?
Sure. Just wanted to start with a quick comment that I may be making some forward-looking statements today, and please visit our website and look at our SEC filings, including our quarterly and annual reports, for a more detailed description of the risk factors. No new update today, but just to summarize, you know, how we look at things right now, we have a consistent view of what we said the last time in terms of how some of the segments have been progressing. We continue to see PC market unit growth in calendar 2024 in the low single-digit range, and a similar range for smartphone unit volume growth worldwide as well. Of course, there are average capacity increases taking place in both of those device categories.
On the data center side, we had said that we expect inventories to get better within the first half of calendar 2024 and more normal ordering patterns to resume from customers. We have continued to see that, and we do expect, you know, better second-half demand compared to the first half in data center as all of that inventory gets normalized. And we have continued to see really strong demand on the AI server side, and all of the products related to AI servers have been exceptionally strong. We do expect that pricing will continue to improve over the course of calendar 2024 as the supply-demand dynamic continues to tighten. And we had spoken about somewhere in the low thirties gross margin for next quarter.
That continues to be our expectation. Our CapEx is more likely to be around $8 billion, toward the high end of the $7.5 billion to $8 billion range. We continue to expect positive free cash flow in FQ3 and FQ4, so the second half of this fiscal year. And a record year from a revenue perspective in fiscal 2025, that continues to be our expectation, with much improved profitability as well. And again, all driven by what we have said: you know, we do see Micron as one of the primary beneficiaries in the semiconductor industry of this AI trend, because of the significant increase in average content of DRAM in particular, increasingly NAND as well, but particularly in DRAM, and our really strong product capabilities on the High Bandwidth Memory side.
That's great. Thank you so much for the overview. We'll definitely come back to AI, HBM, enterprise storage later. But at a high level, I wanted to ask you about customer inventory levels. Last year was obviously a challenging year for you all and for the industry. You know, based on your customer conversations and market intelligence, where are customer inventory levels relative to what you would consider to be normal? And I know you don't have perfect visibility, but across end markets.
Sure.
Where is customer inventory today?
Sure. So yeah, as you said, the customer inventories are different depending on the end customer, different from a segment perspective, and also from a geographic perspective. So it is not easy to make sweeping generalizations, but some trends that I can speak to: the data center inventories have improved quite significantly over the course of the last several quarters. They're in a much better place now. Certainly the AI server growth has driven a lot of that improvement, and there has been some improvement in the traditional compute segment as well, which is improving traditional compute-related inventories too. On the PC and smartphone side, the inventories generally vary quite a bit by customer.
Some customers have been concerned about the rising trend of pricing that they see, and they have used that to, you know, build up some inventory, but they also have an expectation of improved content as AI PCs and AI smartphones start to roll out, more so later this year. Our expectation on AI PCs and smartphones is more focused around 2025 growth, and that's where we are looking for some of that to happen. Not too much from a unit perspective, but more in terms of the mix is what we are looking for.
The mix will step up as people buy some of these newer smartphones and PCs in order to future-proof those devices, because at some point, applications will come out that are going to be very interesting, very useful from an AI perspective. And consumers will want access to hardware features that can enable those applications to run, and that typically means getting to 12 GB of DRAM in a smartphone, which is significantly higher than where it is today, and 16 GB at a minimum on a PC, but a lot of PCs with 24 and 32 GB and above. Those will be meaningful step-ups in content, along with increases in average capacities on the SSD side in PCs and in embedded flash in smartphones.
So we expect all of that to start in a more meaningful fashion in calendar 2025. But I think, you know, we do expect that the supply-demand dynamic is going to be tight, next year as well. And I think, you know, that's also influencing how customers think about their inventory.
Mm-hmm. Mm-hmm, that makes sense. One sort of last question before we dive into HBM, topic of the day, week, and month. So long-term bit demand growth and how you guys think about that. It's been, I guess, roughly two years since you hosted your Investor Day.
Mm-hmm.
Can you remind us how you're thinking about the through-cycle bit growth outlook for both DRAM and NAND? How has it evolved over the past two years? I'm sure there are multiple moving parts for both DRAM and NAND. What are the key contributors as you think about the through-cycle growth profile?
Sure. So in terms of through-cycle bit growth, we continue to see mid-teens type of growth for DRAM, and we continue to see low-twenties type of CAGR for NAND. Now, these numbers obviously change depending on what your base year is, and 2023 was a pretty challenging year for the industry from a bit growth perspective. So if you take it from the 2023 base, the numbers can get distorted a bit, but by and large, those are the bit growth numbers we're looking at. Generally speaking, we are looking for data center to grow faster than the overall average, and some of the device segments, like PCs and smartphones, to grow slower than those averages.
Of course, the rate and pace at which AI PCs and AI smartphones get adopted, and whether there is a change in the amount of time consumers take to upgrade their phones or their PCs, can, you know, noticeably change these numbers. But we are taking more of a conservative approach in our planning: we still expect growth in aggregate unit volumes of PCs and smartphones to be mild, and what we are expecting is more of a mix change, with average capacity growth driven by that mix. So those are the numbers. I will say that the tech transitions on the NAND side are still providing more than adequate bit growth.
So there isn't really any kind of wafer growth that we are foreseeing on the NAND side. But there is a very significant trade ratio when you think about the HBM growth that's taking place. HBM has a 3:1 trade ratio versus DDR5, which means that every bit of HBM we ship takes up three bits of DDR5 in terms of supply. So because of that trade ratio, a considerable percentage of the industry's wafers over time are moving to HBM. And because of that, the non-HBM portion of the DRAM wafers is getting squeezed, and consequently, aggregate wafer growth will be needed in DRAM to even meet the mid-teens type of aggregate bit demand in DRAM.
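The wafer-squeeze arithmetic behind the 3:1 trade ratio can be sketched as a rough model. The fixed ratio and the simple two-product split of the wafer pool are simplifying assumptions for illustration, not a description of Micron's actual capacity planning:

```python
def hbm_wafer_share(hbm_bit_share: float, trade_ratio: float = 3.0) -> float:
    """Fraction of DRAM wafer capacity consumed by HBM, given HBM's share
    of industry bits. Assumes each HBM bit consumes the wafer capacity of
    `trade_ratio` standard DDR5 bits (the 3:1 ratio discussed above)."""
    hbm_wafers = hbm_bit_share * trade_ratio   # wafer-equivalents for HBM bits
    non_hbm_wafers = 1.0 - hbm_bit_share       # wafer-equivalents for all other bits
    return hbm_wafers / (hbm_wafers + non_hbm_wafers)

# At roughly 4% of industry bits, HBM already consumes ~11% of wafer capacity,
# which is why the non-HBM portion of the wafer pool gets squeezed:
print(round(hbm_wafer_share(0.04), 3))  # 0.111
```

The nonlinearity is the point: a few percent of bit share translates into roughly triple that in wafer consumption, which is what forces aggregate DRAM wafer growth even at mid-teens bit demand.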
Got it. And just transitioning to HBM, how are you thinking about the multi-year growth rate in HBM specifically? And I think you might have mentioned this, maybe we talked about this last night at the dinner, but percentage of bits going to HBM today, and how do you see that evolving over the next couple of years?
Sure. So last year, HBM was less than 2% of the industry bits. This year, it ought to be somewhere less than 4%, in that range. We do expect the HBM bits to grow in the medium term in the 50% CAGR sort of range, again based on the 2023 base year, versus an industry bit growth rate in, say, a mid- to high-teens kind of range. But I would say that the HBM growth continues to be strong. The demand continues to be extremely robust, and we are seeing a lot of that significant demand because our HBM3E product is an industry-leading product with, you know, some very breakthrough type of power consumption capability that's at least 30% below competitive offerings.
And so, it's a good environment.
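Those two growth rates imply a steadily rising HBM bit share, which can be projected with simple compounding. The starting share (~4%) and both CAGRs are illustrative figures taken from the ranges discussed above, not precise guidance:

```python
def projected_hbm_share(years: int, start_share: float = 0.04,
                        hbm_cagr: float = 0.50, industry_cagr: float = 0.17) -> float:
    """Project HBM's share of total industry bits assuming constant growth
    rates for HBM bits and for industry bits overall."""
    hbm_bits = start_share * (1 + hbm_cagr) ** years
    industry_bits = 1.0 * (1 + industry_cagr) ** years  # normalized to 1 in year 0
    return hbm_bits / industry_bits

# Under these assumptions, HBM roughly doubles its bit share in three years:
print(round(projected_hbm_share(3), 3))  # 0.084
```

Because 1.50/1.17 is well above 1, the share compounds upward every year under any starting point, which is the mechanism behind the "less than 2% last year, less than 4% this year" trajectory.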
Understood. And then your competitive position within the context of the HBM market: I know in overall DRAM, you're at 20%-plus. In HBM specifically, I think you're in the single digits today. And as for the reason you guys are under-indexed as of today, is that simply a function of past technology bets, or is there something structural to it? And I think you've talked about approaching 20%, 20%-plus sometime in calendar 2025. What gives you that conviction that you go from single digits to 20?
Yeah. So Micron had a similar technology called Hybrid Memory Cube, which we had commercialized and gotten to pretty good scale. And when HBM started to gain some traction several years ago, we switched from Hybrid Memory Cube technology to HBM, and HBM2E was the first product we brought to market and commercialized. And we did put that into volume production, and we got a decent share of the HBM2E market. Because we were making that transition from a different technology, we intersected the HBM2E market towards the latter half of the life cycle of that product. And so we determined that instead of being late to HBM3 and 3E and so on, that we needed to leapfrog the industry and get to a new product, and really industry-leading specs.
And that's what we decided to do. So after HBM2E, we did not do HBM3, but went straight to HBM3E in order to get to the leading edge of the industry. And so we are the first company, we believe, to sample HBM3E to our customers. And not only that, but we also had the best specs in the industry, the best performance headroom to even go beyond published specs, but most notably, at least 30% lower power consumption compared to the next alternative. And we feel really good about that because customers have validated, you know, those numbers, and they are amazed by the capabilities and the specs of the product. And now it is about ramping the product into high volume.
So we have spoken about several hundred million dollars of fiscal 2024 revenue with HBM, and, going into fiscal 2025 and calendar 2025, getting to multiple billions of revenue in HBM. And that's the trajectory we are on. So now it is about, you know, we have the space, we have the tools and slots, and orders and everything all in place. Now it's the operational aspect of ramping the product with high quality, which is, you know, what we always focus on. And we feel good about our ability to deliver on those goals, which we have outlined as getting our HBM share, on a quarterly basis sometime during calendar 2025, to be similar to our worldwide DRAM supply share, which is in the low 20s.
Understood. In terms of customer traction and customer mix going forward, I think you've publicly shared that you're shipping to the H200 that NVIDIA's ramping as we speak. As you look to grow your business, to your point, from several hundred million to multiple billions over the next you know, 12-18 months, what are your expectations in terms of customer mix? If you can describe customer engagements outside of the big GPU company, that would be helpful.
Sure. So yeah, we have a goal of having a broadly diversified customer base, and we have worked hard to do that across our base business. So today, you won't see, you know, any 10% customer like we had disclosed having, you know, in the past. So certainly when it comes to HBM, our goal is to have that diversification there as well. So definitely very strong engagement and partnership with the leading GPU company... We also have really good engagements and really good work and partnerships with all of the large HBM customers around the world. And consequently, our focus is to get the qualifications done and ramp with all of those customers as soon as they are able to ramp.
Given the specs of the product, we see extremely strong interest from our customers to bring our product to market, because when they ship their processors with our HBM, it actually, you know, means lower operating cost for their end customers due to the power consumption benefit that we bring. So we do expect that we will have a diversified slate of customers going into next calendar year, and with each succeeding quarter, we'll have better diversification in our revenue mix. And we have long-term engagements for future roadmap-related products, even beyond HBM3E, so HBM4, HBM4E; really strong engagement, very exciting place to be.
That's great. HBM pricing and profitability longer term, I know as of today, you and your peers get a nice premium on a price per bit basis in HBM vis-a-vis conventional DRAM. The fear that I think some investors have is, over many years, HBM just becomes another DRAM SKU and it's commoditized. How do you think about that? Does the foundry-like nature of the business and the complexity, right, that comes with HBM production, does that enable you and the industry to maintain, you know, premium margins?
Yeah, indeed, I mean, HBM is a super complex product. It's the most complex product that the memory industry has ever built in its history, and it's not becoming any less complex with time. It's going to continue to have a very significant level of complexity compared to the rest of the portfolio. And so we continue to feel like the pricing is going to be high because, you know, for starters, the cost per bit to produce HBM is much higher than, you know, the rest of the portfolio. So the pricing, by definition, then, has to be higher. But even if you look at it from a gross margin perspective, our expectation on a long-term, through-cycle basis is for HBM gross margin to be accretive to the core business.
Again, because of the product capability, because of the value it brings to our customers, because of the complexity of the product. But even more so, as we think about going from the current version of HBM3E, which is an 8-high stack: in 2025, as we go through the calendar year, more and more of the mix will transition to 12-high, and that is an even more complex product to do than the 8-high. Future generations of HBM may go to, like, 16-high. So again, that will be a further increase in complexity. And as we look past HBM4 to, like, HBM4E type of products, there is a desire amongst multiple customers to figure out ways to differentiate, because every customer has a slightly different approach in terms of how they are creating value.
And when they look at their subsystem, a significant amount of value is in how the processor and the memory work together. So it's not just about, let me just build the best processor I can; it's about, how do I build the best subsystem that I can? And what that means is, a lot of these workloads in AI tend to be memory-gated, memory-bound workloads, which means that to really improve the performance of these workloads, you really have to do two important things. One is have more DRAM available for the processor, so the amount of DRAM that can be connected needs to be maximized.
And you also have to significantly increase the memory bandwidth between the processor and the memory, so that the data movement can be done with the benefit of that higher bandwidth. And of course, all of that has to happen with more and more energy efficiency. So I think those are the things that have to be done, but more DRAM and more bandwidth is the critical aspect. And different customers have different ideas about how to create some level of differentiation in HBM usage going forward. As long as it is the same type of HBM that everyone is using, for example, when we build an HBM3E product, of course it has to be qualified with different processors, but ultimately it's the same product.
But for HBM4E and beyond generations, customers are thinking that they need to find some way to create that differentiation in how HBM works with their processor. And one approach is to put, you know, some logic into the base die of the HBM, and that then makes the HBM product more of a custom product to a customer. And when you make an HBM product into a custom product, then it becomes more of an ASIC-like engagement model, because now you're co-designing the base die with the customer, with the customer's logic embedded in it, along with your own logic that you have as a DRAM company. And then it changes, you know, the whole engagement model, the co-development aspect of it.
You know, much better visibility and coordination, much better ownership on the part of the customer for the orders that they place, because it's a custom product. It's not like you can sell it to somebody else. And so a lot of foundational changes can happen in the industry driven by that. And that's a really exciting trend, I think, for the HBM business. And of course, then it also changes the dynamic versus how you view the business, its margin profile, its stability, versus the rest of the portfolio.
That's a great segue into my next question on LTAs. I think you guys have talked extensively about both volumes and pricing being pretty much locked in, if you will, for 2024, and for the most part, into 2025. The other question or pushback that I often get from investors is, LTAs historically have been sort of a one-way commitment. You're committed to supply, customers can, you know, flexibly push out, cancel, et cetera. How would you characterize the LTAs around HBM that are placed today, vis-à-vis historical LTAs? How are they fundamentally different from past LTAs?
Yeah, I mean, we had started using LTAs several years ago, and I believe, like, Micron had led the way in a lot of these LTAs to really help drive better visibility and better planning for us on the supply side by having, you know, agreements with customers on what type of volume they were going to take. And over the years, you know, they weren't really meant to be, like, binding agreements that customers have to take that volume no matter what happens to their business. It was more about, you know, good-faith agreements with customers. And yes, there have certainly been a lot of ups and downs since, including, in the post-COVID world, a lot of changes in how customers manage inventories, and the aftermath of that.
As it relates to HBM and HBM-related agreements, they are different from the previous agreements and LTAs that we have done. For one, like you said, previous agreements focused mainly on volume, let's say volume by quarter. HBM agreements have pricing in them as well. So even though we are still in the first half of calendar 2024, most of calendar 2025 pricing and volume is done. And so it's really a very different place to be. And the terms and conditions in these agreements also are a lot more robust. Now, you know, obviously, we always work with customers, and it's a very important relationship.
So it's really something that both sides generally work on together to ensure that, you know, these agreements are structured in a way that ultimately works out for both sides. But definitely these agreements on the HBM side are different from the other agreements in meaningful ways.
Great. That's very clear. I'm going to pause here, and see if we have any questions from the audience. Here we go.
Morning. Could you talk a little bit more about HBM as it relates to not just the data center, but longer term, presence and potential presence in edge devices, AI phones, iPads, or whatever, that are driven by AI? Because I think there's a lot of conversation about that right now, but trying to get a better handle on how you, you guys are thinking about that sort of aperture over the next two to four years. Thanks.
Sure. In terms of HBM outside of the data center, definitely in places like automotive, there is discussion from customers who have an interest in using HBM, again, because of the significant computational needs of creating more driver-assist capabilities in the car. And so there is a desire to do that. For consumer devices like smartphones and PCs, it's a little bit more difficult to see what any kind of broad-based application for HBM might be in the medium term, because these products are very expensive, they are very high bandwidth, and they have, you know, a high level of power consumption. Because they're created essentially for an
environment where there is a very significant need for very high bandwidth, and that does create power consumption at a scale that these devices typically are not used to. So, when we think about enabling AI PCs and AI smartphones, a lot is focused on how to increase the amount of DRAM that's addressable in these products, and how to lower the power consumption. And of course, you know, lots of interesting things are happening on the power consumption side.
And new architectures, like processing-in-memory types of approaches, are being contemplated in order to figure out what kinds of approaches in, like, laptops and smartphones could ultimately bring the value of higher bandwidth, more DRAM addressability, and enablement of AI workloads on these devices, without creating a lot of the cost and a lot of the power consumption that HBM would create. So those are the approaches that are being worked on. It will take many years for some of these to come to market, so the easiest path is to simply expand the amount of DRAM. And that's what I was talking about earlier: smartphones getting to 12 GB is really one important place where you can start running, for example, 7-billion-parameter models.
Some companies are trying to run, like, 10-billion or 13-billion-parameter models, but at least that 7-billion-parameter model could be run using a 12 GB smartphone on the DRAM side. And then on laptops, you know, it's more like 16 GB, but preferably more, 24 to 32 GB, to do it well.
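The capacity thresholds above follow from simple weight-footprint arithmetic. This is a back-of-the-envelope sketch; the quantization choices (1 byte vs. 2 bytes per parameter) are illustrative assumptions, not anything stated in the discussion:

```python
def weights_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate DRAM needed just to hold model weights (ignoring KV cache,
    activations, and OS overhead), so this is a rough lower bound.
    1e9 params * bytes_per_param bytes ~= params_billion * bytes_per_param GB."""
    return params_billion * bytes_per_param

print(weights_gb(7, 1.0))  # 7.0  -> 8-bit weights leave headroom in a 12 GB phone
print(weights_gb(7, 2.0))  # 14.0 -> fp16 weights alone would exceed 12 GB
```

The same arithmetic shows why 10-billion and 13-billion-parameter models push past a 12 GB device even at 1 byte per parameter once the OS and runtime overhead are counted.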
Hi. Film versus reflow in packaging of DRAM: there are three players, two are using one technology, the third uses the other. Can you talk about Micron's experience in packaging, the pros and cons, and maybe your roadmap for the future, which technology you're thinking about using?
Yeah, I mean, for competitive purposes, I cannot talk a lot about the future roadmap because we have a lot of good innovation happening across the entire stack when it comes to packaging, especially in this world of 2.5D packaging capabilities. You know, we have a lot of good capabilities and good technology on that front. And really, when we approach complex products like HBM, we approach them as integrated, end-to-end system type of problem statement. So we are focused on vertical integration capabilities that Micron has and end-to-end optimization of that stack. And packaging technology and capability is a very important angle, and it's based on that type of a thinking and thought process that we have been able to get to some of these capabilities on HBM that I spoke about on the power consumption side.
So we have a really strong roadmap on the packaging front. We are always running experiments and trying different capabilities. We are very deeply engaged with the ecosystem on the supplier side, and on the foundry side, to really be at the leading edge of all of the new capabilities that are being investigated in the industry. And we feel really strongly about our capabilities on that front.
If I can just follow up: I don't doubt that you have a very wide range of options, but, like, why do you choose one versus the other?
A lot, a lot of that is driven by our internal assessment on a whole range of capabilities we are trying to optimize. So we are trying to build products with low cost. We are trying to ensure high performance. We are trying to keep power consumption low. We are trying to ensure that we get high yield. Different approaches have different pros and cons. I mean, there has been a lot of discussion in the media about one approach versus another and it's rarely a black and white thing like that. Every approach has its positives and its drawbacks, and it's a matter of how you optimize the end-to-end capability and take advantages of the positives and are able to work with and minimize the drawbacks.
And so, in our decisions, whether on the packaging front, on using one approach or another, or even on the front end of the line, when we think about the timing of when we use EUV in our products, et cetera, we deviate from the industry from time to time because of our strong focus on, you know, the optimization of all of these broad capabilities, and we sometimes reach different conclusions. But we have shown, you know, pretty repeatedly that, you know, we are able to come out with these leadership products based heavily on the choices we have made.
So when we first spoke, for example, of, you know, how we are going to do our HBM technology, there were considerable questions about, you know, what kind of packaging technology we were using, and here we are with the best product that the industry has seen in HBM. You know, similarly, there were a lot of questions in past years about the timing of our EUV technology deployment in DRAM, and we have again shown that we have the industry's most advanced DRAM node with 1-beta, which does not even use EUV. And we were the first to market by multiple quarters, with the best-performing node in the industry, very low power consumption, very high performance.
A lot of these decisions are made to, you know, really optimize a whole bunch of parameters, and, you know, we may sometimes reach different conclusions.
I want to ask a question on the conventional DRAM side, the 90%...
Sure
of bits that still play an important role. In terms of server DRAM and the transition to DDR5, I think it's fair to say, relative to expectations that perhaps you had 18 months ago, 24 months ago, it's been relatively slow. 2023 was a challenging year for your partners on the CPU side. It feels like things have bottomed and hopefully, you know, there's a recovery coming. What are you seeing from a general purpose server perspective in your DDR5 business?
Yeah, the DDR5 transition should continue to gather momentum over the course of the next many months and quarters. There are multiple new server products being introduced by different customers. You have the leading CPU companies, but then you also have CPUs that are designed by our customers themselves. All of these designs use DDR5 because, you know, the bandwidth and the overall capability that DDR5 enables is just at a very different level compared to DDR4, and then obviously higher and higher frequencies of DDR5 as well. Consequently, our focus has been to, you know, really get more and more of our mix to DDR5 and, of course, to LP5 as well.
Not just in mobile phones, but LP5 is starting to get used in the data center, again, due to the power constraints and because after the processor, the DRAM is the most important driver of power consumption. And so, getting to LP5 in the server is also another important vector over the next few years. And so, yeah, I mean, these are very important trends, and we see a pretty significant increase and escalation in the amount of DDR5 mix, which is, I think, overall, really good for the industry because we have very strong capabilities on that front.
Got it. The last five minutes that we have, I wanted to pivot to the NAND side of your portfolio. Earlier on in the session, you talked about, on the DRAM side, the impact or the benefits from AI are clear. We see it in your HBM business, even in your conventional DRAM business as well. On the enterprise SSD side, 12 months ago, we really hadn't seen that inflection, and it finally seems as though we're starting to see it. So from a demand perspective, you know, what are your expectations in terms of enterprise SSD? You've done a great job in terms of gaining share there. So competitively, you know, what are you seeing in the marketplace today?
Yeah, we feel really good about our data center SSD portfolio. We have been investing in that area for some time. Several years ago, when we were assessing our portfolio, it was clear that data center SSD was an area where we needed to do more. It is the most profitable portion of the NAND portfolio in the industry, and so we had put a lot of work and effort. Again, looking at the data center SSD as a subsystem that needs to be optimized, every aspect of that needs to be optimized based on our internal capabilities, and we had to grow a lot of our internal capabilities to do that. We have gotten to a point where we are seeing the fruits of all of those investments.
And so we're really excited to see the improvements in our share trajectory. 2023 was, you know, obviously a challenging year from a demand perspective from our customers, but we had used that time to really get our capabilities to the next level. And so we are actually seeing pretty significant demand on the data center SSD side. We are moving a lot of our mix to that, and it's a much more robustly profitable arena as part of the NAND portfolio, and we are very optimistic. Our 30 TB high-capacity drives are really phenomenal, and our high-end performance-based products that we have in the portfolio for the data center are also really good. We have a very strong roadmap for the future as well, which we'll be excited to talk more about at the right time.
Got it. Then just NAND broadly: given the current pricing trajectory and the margin trajectory, you're headed in the right direction, but margins are still relatively depressed vis-à-vis, I would think, internal aspirations and expectations. I think, you know, Sanjay, in the past, has said something along the lines of: we need industry consolidation for things to get better. Is that still the view, or do you think even with the current competitive setup, you can improve upon, you know, profit margins?
Definitely consolidation would be better for the industry. You know, we are not holding our breath for that, however, because our focus is that, whatever the state of the industry, we have to get to a better place, and so we have been very focused on our portfolio of products. For example, you know, I spoke about the data center SSD part, but also, if you look at the mobile portfolio, with UFS 4 we have a phenomenal set of products that have gotten into flagship phones from various customers, which had not been the case in the past. And on the client SSD side, we have QLC products that perform as well as, and are virtually indistinguishable from, our competitors' TLC products from a capabilities and performance perspective.
But, you know, obviously, they give us much better capability on the cost structure side. So we bring that technology differentiation on the client SSD side as well. And it's every aspect of our portfolio; similarly, on the automotive and industrial side, we have a really strong franchise with differentiation. Since we have a relatively small share on the NAND side, our goal is not to try to be everything to everyone, but to really focus on areas where we can have a share of the industry profit pool that's bigger than, you know, what our revenue or bit share might be.
Okay, great. In the last 30 seconds, anything else that we should have asked or anything you'd like to highlight before we close?
No, I think we touched on a lot of the important areas. It's a very exciting time in the memory business, in the way the business is transforming and growing, and we could not be more excited about the growth opportunities for the future and the way the foundational changes are taking shape in our business.
That's great. Thank you so much, and best of luck.
Thank you, Toshiya. Appreciate it.