Good morning, everyone. Thank you all for joining the RBC TIMT Conference. Thanks, Mark, and thanks, Scott. Also, we have Satya from the IR team at Micron in the audience. We have roughly 45 minutes. Mark's going to start off with a few comments, then we'll get into Q&A. We should have enough time for questions from the audience at the end. With that, let me hand it over to Mark.
Okay, good morning, everybody. Srini, thank you for having us today.
Thank you.
I'll start with Safe Harbor. We'll be making forward-looking statements. Those statements have risks and uncertainties associated with them. I refer you to the risk factors disclosed in our public filings, including our most recent 10-K. Things are good at Micron right now. Business conditions have continued to improve since our September earnings call. We're executing well on all fronts: technology and products, both of which you'll hear about today from Scott. Operationally, we're doing very well. Data center demand, which includes many of our highest value products, is especially strong. All other markets are also healthy. We expect growing AI demand to drive a multi-year data center buildout globally. We've seen much more supply-demand tightness than we expected at the time of our earnings call comments.
That is allowing us to drive robust pricing trends, and we're doing that across markets. We project tightness to continue beyond 2026, due to both supply and demand factors. Consequently, customers have approached us about entering into multi-year contracts. On contracts, as we noted in our fourth quarter earnings call, we expected to close HBM negotiations in the coming months. Today, we are pleased to announce that our HBM supply is fully contracted for calendar 2026, for both HBM3E and HBM4. On supply, in our assessment, the industry supply response is limited in the near term by clean room space availability to address all the demand opportunities we have in front of us. Inventories are very lean. DRAM, as we've talked about before, is below our target levels. NAND is improving.
By the end of the year, we expect it to be near target levels. Near term, we are growing supply through the ramp of our one beta and one gamma node transitions on DRAM through existing clean room capacity. We are also driving a lot of productivity gains and optimization of that capacity. Construction is also ongoing for new clean room capacity for future supply requirements. Ultimately, our success is built on a foundation of strong execution that leverages our leadership technology and product position. That is the focus today. I am very pleased to be joined today by Scott DeBoer, Micron CTO, who can provide his views on Micron's technology and product position. Scott?
Great. Thanks, Mark. Thanks for having us here today. I'm mostly going to focus on our position and where our products sit today. Micron is in the strongest technology and product position in the history of the company. We have product leadership in both DRAM and NAND. That's enabled by a core innovation engine at Micron: a global technology team and a strong partnership with our manufacturing operations. That partnership has led to four consecutive nodes of technology leadership on both the DRAM side and the NAND side, and also to sequentially improving yield ramps on every one of those nodes.
A real strength of Micron is our ability to generate new technology in a very efficient and quick manner, and then to ramp it fast into volume manufacturing. We have enabled a number of nodes, as I mentioned. Right now, we have our proven one beta technology already ramped in high volume. We also have our one gamma node, which is now at mature yield and ramped on some products, and it will continue to ramp on more products over the next two years. We're in a really strong position with those two nodes, both at mature yields and ramping fast. Our next nodes, one delta and one epsilon, are both planar DRAM nodes.
Those are ramping over the next several years and will be the focus both of our high-density products, our LP memory and our DDR memory, as well as eventually being part of the HBM landscape. Longer term, we are focused on true 3D DRAM beyond those planar nodes. Just like we've done in the past with other technologies, our focus is on bringing that technology to market at the right time when we believe it's cost-effective and brings a competitive advantage. I think we're in a very strong position with our 3D technology on the DRAM side. We look forward to bringing that in at the right time. We haven't picked an exact year yet because it's going to depend on how technology evolves on the planar side and when we reach both cost and performance crossovers.
On the DRAM product side, we've had a very strong year, in particular with our LP and our DDR products, both of which are now ramping on our one beta and one gamma nodes. They're a bit different in how you think of them now. Of course, both have significant aspects that are commodity, and in that sense our leading-edge technology, power, and performance differentiate us on pure performance in the market. But significant parts of those products are more differentiated. Even these products, we don't think of as products without differentiation. That's visible in the fact that our LP DDR5 DRAM was the leading data center LP in the market and sole sourced for a big part of last year.
On the rest of the product front, we've focused significantly on customer engagement. Our customer partnerships over the past year or two have been a strong focus, because all these products, whether it's the sole-sourced LP that I mentioned or the HBM product that I'll talk more about in a couple of seconds, are really the results of long-term customer partnerships: building in features that our customers want and foresee, which takes multiple years. That's really the story of our success on our DRAM products and our NAND products, and for sure on our HBM products. Our HBM3E, which we've talked about publicly for several quarters, was really a ramp from zero to a substantial market share position based on excellent execution, business strategy, and the best product in the market.
We're now in a position with an HBM4 product that we also think will be the standard setter and the benchmark for HBM4 performance going forward. A couple of things on the HBM front, and I'm sure there'll be some questions on HBM4. We have started with a product that is internally developed and designed to run entirely on Micron silicon. The strategy around that was to utilize the core technology capability that we knew we needed to deliver a much higher performance HBM4 than the market was calling for a year or so ago.
By keeping that internal, we were able to utilize Micron's strengths: our metallization technology, the right kind of CMOS for memory in an HBM, and the ability to design a much higher performance HBM4 product than the market was asking for a year ago. It is aligned with, or still above, the capability that customers are asking for today. All that led us to design a product that will demonstrate very high yield above 11 Gb per second; it was basically designed to operate in that kind of space. Even though the market was calling for something almost half of that, we anticipated that we needed to be in a stronger position.
That really did lead to our strategy of staying internal on the CMOS and designing with Micron's metallization schemes, which are much better aligned to optimization for memory. It positions us with a product that will set the benchmark for performance without any design revisions, and that continues into HBM4 the leading power we've enjoyed on HBM3E. Last, on the NAND side, we've again had multiple generations of product and technology leadership. With our Gen 9 NAND right now in high volume manufacturing and ramping across new products as the leading node in the industry, we have positioned ourselves with a focus on data center, SSDs in particular. As you've seen, our growth on data center SSDs has been very substantial over the past year.
We positioned ourselves with high-density data center SSDs based on this Gen 9 technology and 4-bit-per-cell, or QLC, technology leading the industry, building our NAND product portfolio at a different level than we've been before. Over the past year, we've had significant success on PCIe Gen 5 high-density data center drives. Now we're leading the industry with the first products coming out on PCIe Gen 6. This is the first time Micron has led a protocol change like that, with PCIe Gen 6 data center SSD drives, and those are getting great uptake and customer interest.
I think my message to start with is the technology and products position is exceptional right now at Micron and really well positioned for the exciting opportunity in front of us. Certainly focused on data center, but also across mobile, automotive, and other spaces where our products are recognized as leading.
Great. Thanks for those comments, Mark and Scott. Scott, I definitely have some questions for you on the technology side. But I'm going to start off with Mark's update just now. I guess this is what, the 10th quarter of the upcycle in DRAM, Mark. Historically, these cycles have lasted anywhere from 8 to 10 quarters. For an average memory investor, this looks like we're closer to the peak. But based on what you just said, you seem to have visibility that the industry tightness will probably continue through all of next year, which I think makes sense given how strong GenAI demand has been. Talk to us about allocations, especially with customers asking for longer-term contracts on the commodity DRAM side. How are you approaching allocation?
When you talk about longer-term contracts on the DDR side, are customers willing to commit to both volume and pricing, or are we just talking volume commitments here?
Yeah. Let me step back. The industry and Micron took decisive actions during the downturn to right-size the capacity in the industry. We were very disciplined in our CapEx investment, managed our inventories very carefully, and delayed node transitions. The wafer capacity at Micron and the industry came down. Now we have a situation where the AI demand is very strong. We'll talk today about reasons that AI drives higher performance product requirements. You have one of those in particular, high bandwidth memory, which consumes a lot of silicon. Now that the demand has kicked in on AI and has broadened out to other parts of the market, the market has gotten very tight, especially on DRAM, but it is improving pretty dramatically on NAND.
This past year, we worked our inventory levels down, and supply was provided by inventories that had been built up in the downturn. This year, we are focused on node transitions to give us the supply we need. Overall, we believe the supply that is going to come into the market, now and beyond 2026, is going to be inadequate to meet all the opportunities in the market. There is greenfield capacity needed for ourselves and the industry, which you see coming on in 2027 and beyond. We are in discussions with customers. We have been approached about multi-year agreements. Supply assurance is important for them. It is good for our visibility on investing the capital and technology that we need to support our customers. Those discussions are underway. We are not going to disclose price and volume commitments, but they are multi-year.
That is an indication of the importance of memory and the importance of us providing supply assurance. What I will say is we're extremely disciplined in our investments, be it what we're choosing to focus on in our technology and product development, and then importantly, the rate and pace of our capital spend. We, of course, take in demand signals, and we very regularly process those and run those through our capital planning models. We use external sources. We use a number of methods to get a good read on what sort of supply we need to bring on and when.
Ultimately, for Micron, as we've said, we want to achieve stable bit share and then deploy our bits to the more valuable parts of the market. That is Scott's team, Sumit's team, and the broader Micron effort around getting the right technology and the right products, and executing very well with Manish's team. I would just say that given the tightness in the market, given the duration of that tightness, which we now believe will extend beyond 2026, and given the customer interest in multi-year agreements, the CapEx number that we had provided, a run rate of about $18 billion for the year, is going to see pressure to come up this year.
I would expect at our earnings call, which is about a month from now, we would probably bring that up and update you at that time.
Got it. Your comment about customers asking for multi-year contracts, are you seeing that across all end markets, or is it specific to data center where the demand seems the strongest?
We're seeing it broadly.
Okay. Got it. Mark, anytime pricing gets to these levels, we hear concerns, especially in the consumer markets, about bill of materials costs and de-spec'ing. Are you seeing any of that behavior from customers, as in, "Okay, I have 12 GB in my phone. Either I have to raise prices or scale back on the memory content"? Historically, have you seen that kind of de-spec'ing, or is it something we should expect in this cycle given how strong pricing is?
I can't say that we've seen that yet. I can't say that it's an elastic thing. The performance of the devices, especially as we get into greater applications with AI, emphasizes the importance of DRAM, and in the case of our focused data center portfolio on NAND, the high performance aspects of NAND. As to your question on consumer devices, we see DRAM content increasing. We think that's a trend that will continue.
Got it. Got it. The other question we get at this point in the cycle is about peak margins. If I look at what you reported last quarter and what you guided to, 51.5% gross margins, I think back in the 2018 cycle you peaked at 60% plus. It is difficult to compare cycle to cycle, and HBM is a bigger portion of your mix now; it was essentially zero in 2018. Is there any reason to think that margins can't go back to those previous peaks? I would argue that they should actually be better because of the HBM mix. How should we think about it, given the tightness comments that you made?
No, I think generally you're thinking about the drivers the right way. We're not going to provide guidance on margins for the second quarter or beyond. We have said that these trends will be positive. Today, I talked about how tight the market is and how that's driving pricing. We are also operating very well. Our cost performance is good. Our mix of products, as you point out, has really been a wonderful story for Micron. As we said on the earnings call, if you look at the high capacity DIMMs, the HBM, and the low power DRAM in server, that was about $10 billion in our fiscal 2025 — a material part of our business that, back in 2022 and 2023, was near zero.
You add on top of that the high performance SSDs, and you're in the low teens billions of dollars of business for just that group of four. These are premium products with generally better margins. As we look out, that will of course grow substantially on an absolute basis, and potentially grow as a percent of the mix. Given our positioning on technology and products, we believe this mix effect will continue to help us. We've said before that we think the second quarter can be stronger than the first quarter on margin. Even with first quarter conditions improving, we still believe the second quarter will produce better margins than the first.
That's great. Switching gears to HBM, we know that the demand is very strong. You just told us you're pretty much sold out for next year. There has been this persistent debate about your roadmap. Scott, you talked about having a very strong product portfolio. Maybe help us understand: in HBM3E, you said you had product leadership. Do you see that leadership, whether it's performance or power, transferring into HBM4 as you look out on the roadmap? Maybe for Mark: as we look out at HBM shipments over the next year or so, when should we expect the crossover to happen between HBM4 and HBM3E? Should we expect Micron to pretty much follow whenever the industry crossover happens?
Okay.
We do expect our performance advantage to continue on HBM4. From the very first concepts of how we created a better HBM3E product, we looked forward on HBM4 to ensuring that the advantage we had was sustainable. We have a combination of the process technology that we use to make our HBM, the advanced metallization sequence that we use on both the base die and the DRAM chip, which is different from others, and also some very key IP in our design capability. We design our HBM differently than the rest of the industry. Of course, those things eventually normalize.
We have a roadmap of different design elements that we bring in on HBM4 and then we'll again bring in differently on HBM4E that we believe will continue to enhance our capability both from a power and a performance point of view. On HBM4 in particular, the challenge really was looking ahead and making a decision to design the product at a much higher capability level than the industry was calling for a year or more ago when this product was in design. We'll do the same thing on the future generations. For HBM4 specifically, we do believe we'll have the industry's best performance on that product out of the chute at high yields. We think that the power advantage, of course, has to be proven.
Like HBM3E, early indications from our customers are that we're in a very solid position now.
Yeah. There seems to be this concern that because you're doing your base die in-house rather than using TSMC, at least for this generation, or because you're still on one beta, you somehow have a disadvantage. How do you address those concerns?
We're confident in the choices we made. To me, it feels a little bit like where I was sitting four years ago, describing why it made no sense to put five EUV levels in a DRAM chip at that time, because the technology wasn't ready. It would have been an inherent problem for anybody who did; it would have slowed down your technology roadmap. At that time, there was a lot of public news about why Micron was in trouble because we weren't putting a bunch of EUV into our process technology. Some of this is just living through that again. In this case, I'm even more confident that the decision on the base die was absolutely the right one. Optimizing a logic chip for HBM, for memory capability, is actually quite different than optimizing a logic die for ASICs.
Of course, the foundries will catch up, and they will figure out the things that are most important in terms of power delivery and low-voltage operation specific to signaling for memory. There is a lot that has to happen there. We have announced we are using TSMC for HBM4E. We have been partnering with them over a period of time to make sure that when we do introduce that, it is optimized for building HBM and not just a follow-on to the ASIC designs that have been the norm for a long time. I think the base die in particular is going to turn out to be a significant advantage. Yes, there is news, but a lot of it is generated by self-interested parties in a specific country.
As I mentioned, the big claims in that news, that we need to redesign and that we have some technology gap, are simply not accurate. We have the best products right now. We've sampled the highest speeds on HBM4 in the industry. We've already given our customers HBM4 samples running over 11 Gb per second. We are actually the only company that can test those right now, because test capability at those speeds is also important. Part of our design is a built-in self-test regime that allows us to test and validate our material at that speed.
If anything, I'm more bullish on our HBM4 position technology-wise, independent of all the articles I read, than we were on HBM3E, when we felt like our product was going to be good but we had never ramped one before. In reality, in the last 18 months, we've gone from zero to the HBM market share we've talked about. Our customers know how good our product is on HBM3E. They have more confidence now in our ability to execute. So when we put a technology out on HBM4 and talk about it, we have an even greater level of confidence that we'll execute to that.
Yeah. That is, I think, a very detailed answer. Thanks for that. I want to go back to Mark. Given your comment that you are sold out for next year, and that also includes HBM4, should we assume that all the qualifications are done and you are ready to ship soon? Is that what your comment suggests? Or do qualifications still need to be completed?
Yeah. The systems aren't ready for HBM4 yet, right? Qualification for HBM is different in how we work with our customers. It's not just that our part works; it has to work in their system. We go through a number of milestones, and we've hit all of them so far on HBM4. The next milestones depend on the system optimization that we partner on with each of our customers, and we don't call a product qualified until it's actually in their system and working. Since most of those systems are not yet available, nobody can actually be qualified yet.
Maybe just to build on this point, tying it back to our capital expenditures. This deep engagement with customers, and this confidence that we have in our product, obviously helps us gain confidence in the investments we're making in DRAM capacity. Our DRAM capacity now is heavily influenced by our view on HBM. We have a very good view of where we stand on HBM, the increased use of HBM in systems, and how that's going to determine supply requirements and what sort of investments we need to make. I mentioned earlier that, based on HBM, a number of other products, and the overall tightness in the market, we are likely to bring our CapEx number up at the earnings call.
That does not necessarily mean that capital intensity is increasing, because defined as a percent of sales, it may not be. We will work through all that. The point is the confidence we have to invest on the basis of our technology and product position.
Great. Going back to my previous question: when do you anticipate the industry crossover from HBM3E to HBM4? And when it crosses over, do you think there will still be some demand for 3E? Or is it something like what we see with HBM3 today? How do you see that playing out?
We start shipping in the second quarter, the systems will move, and it will ramp in the second half. I don't think we've given a specific crossover point. Unlike standard products, we're not expecting a long tail on these products. The technology moves very quickly, which is why it's important that we sustain the capability we have and the deep engagement with customers. We are just careful about our planning. We would expect the market to move to HBM4 and largely stay there; of course, there'll still be some HBM3E for some time. Then we are on to the 4 and the 4E.
Sure. And you've been gaining share for the last several quarters. I think you pretty much got to your target, which is your overall DRAM share. As we look forward, it looks like there's potentially opportunity for you to pick up even more share, though I don't know if you have the capacity or supply. How do you think about market share over the next several quarters? Also, do you anticipate your HBM4 market share being similar to what you had in HBM3E, or do you see opportunity to improve on that?
I would just say that for HBM, as we talked about in the third quarter, we believe we achieved what we said we would do: a share equivalent to our DRAM share. At this point, it's a large product line that we treat like our other large product lines. We look at our customer requirements and the bit trade-offs amongst all our businesses, and we make those decisions. Clearly, given the trade ratio, deploying more of our capacity to HBM means fewer bits for some other markets. These are the trades we have to make, and we make them all the time with our products.
Got it. Got it. In terms of profitability, HBM has been a great product for you. You've been improving the cost structure and margin structure, and based on your reported segment gross margins, we can see consistent improvement there as well. As we go to HBM4, do you expect profitability to stay at this level, or maybe get even better? And because it's an early ramp, are there any issues we should be aware of in the early days of this new product?
I mean, Scott can talk more about yield. It's yielding well at this stage. We're, again, confident and positive about that. I would say that HBM is providing a lot of value to the system. Each generation has gotten more complex and provides more value. It is very difficult to build, and we have the highest performance product. Not only would we expect HBM to be, through cycle, generally more profitable than the rest of the business, we would also expect to get value for each generation that provides more value to the system.
Maybe Scott, you can talk about what drives that cost increase as we go from HBM3E to 4. What are some of the components?
Yeah. I mean, Mark talked about the trade ratios, and those are always an important piece. The trade ratio is how many wafers it takes us to build HBM cube bits versus standard DRAM bits. One of the fundamental pieces, defined for all of us by the spec, is the die size on the HBM. From that point of view, the HBM4 die size will drive some cost structure increase, just from the pure math on silicon size, that we'll have to overcome in other ways. Now, as we've talked about with the HBM3E ramp, when we look at the evolution of Micron's capability here, we came from not even having HBM manufacturing to the market share position we're at today. When you go from zero up to that significant market share, you obviously become more efficient.
You learn how to manufacture better and more cost-effectively. All through this last year, our HBM manufacturing costs have continued to improve because of that: the maturity of manufacturing, and our design for manufacturing continuing to get better. When we took HBM3E 8 high to mature yield and then went to 12 high, we got the 12 high there faster than the 8 high by a significant amount. Based on where we are right now on HBM4, our expectation is that we'll again substantially improve the time to mature yield versus what we did on HBM3E. Based on that, we will keep improving our cost structure on HBM4, and that will be an offset to some of the inherent pieces around die size and other things.
We are very much focused on consistency of manufacturing flow between HBM3E and HBM4 and making sure wherever we can utilize the learning and the process improvements and the cost structure of HBM3E and translate that over to HBM4, we are absolutely taking advantage of that. Overall, I think our expectation is we continue to improve and have a very good cost structure on HBM4.
Got it. You expect that to continue as we go into HBM4E as well, I guess?
Yeah. I mean, HBM4E is a little more complicated because we have the mixture of more custom products and standard products. It is, again, a new variant. Of course, we will optimize our manufacturing costs around yield ramp and reutilization there also. As I mentioned, we are bringing in TSMC for the base die on that, which changes the cost structure of the HBM cube by itself. Ultimately, it will be a different kind of optimization, with custom parts versus standardized parts.
Right. Got it. Got it. Scott, I want to switch gears a little to the content side of the HBM story. NVIDIA gives us pretty good visibility into their roadmap, so we know what to expect from a content-per-GPU standpoint, and AMD also just recently gave us some additional details. At the same time, as GenAI workloads evolve from pre-training into inferencing, we hear new terminology like RAG and vector databases, et cetera. I'm just curious: there seems to be a lot of interest in low-power DRAM outside of HBM as well. In your cloud business, I think HBM is roughly half of what you reported, and the remainder is probably low power and some high-capacity DIMM revenue as well.
Talk to us about how do you see that playing out as inferencing becomes a bigger and bigger portion of the workloads? Are we seeing, I guess, a new class of memory that we need? Or I'd just love to hear your thoughts on that.
Sure. I think it'll play out over the next few years. The key thing to come back to is that there is general and growing recognition of the importance of memory products in enabling all kinds of systems, AI systems in the data center and at the edge. Memory is much more front and center in system architects' thinking today than it ever has been. It will evolve to be more optimized for edge applications, for robotics, even for automotive, and certainly for data center inferencing versus training. The thing we look at is that there's no question memory is central to how you solve those problems across that space. Absolutely, LP is a key part of enabling that, just like HBM.
We see growth both in terms of HBM content and LP content going forward to support this. We need to be positioned for subtle changes in that, and that is what we are focused on. A piece of that is making sure our LP products are differentiated just like our HBM products are. I think that is part of what we have demonstrated over the past year. We will continue working with customers to make sure we have absolutely the best low-power DRAM products for them, whether it is LP6 or some modifications of that in the future, and certainly LP5 in the near term, making sure those products are solid. Overall, we think we can shift around to meet the different applications, and there is going to be growth across all of them.
Got it. Got it.
I think maybe also on this issue of inferencing, the longer context windows, the larger models, the deep reasoning, I mean, this is all also bleeding over into the need for higher performance SSDs. As you know, we've focused our NAND business on that. We're seeing that drive requirements for faster performance SSDs as we need more access to the data faster, and then the higher capacity SSDs, just so that more of that data is warm and can be used in the inferencing process.
Right. Actually, that's a great segue to my next question, Mark. If you look at the SSD market, you have done quite well in this market. Not too long ago, we were talking about excess inventory in this market. Suddenly, demand has picked up quite a bit. There are a couple of debates. One is that, oh, the AI evolution of the workloads is what's driving this demand. The other argument is that, oh, it's not necessarily because of AI, but it's because of cyclical reasons and also the tightness on the HDD side. I'd love to hear your thoughts on what you think is driving the near-term demand and how sustainable this is.
We think it's sustainable for the reasons mentioned. There are a number of factors: inferencing is going to drive access to this vast pool of data, and SSDs can do a better job of accessing that data. We think that's the fundamental driver. As we sit here today, we expect our NAND inventory levels to be near target by the end of our fiscal 2026, and we're going to see those inventory levels continue to decline, particularly in the back half. In that market, just like DRAM, pricing has gone up substantially. I'd say the principal driver is use cases, and that's good because those are durable drivers. Certainly, in the short term here, the hard drive market is tight, so there's probably been some demand related to that, and it may be a factor longer term as well.
I think the continued replacement of HDD by SSD is a long-term value proposition that we have to offer. In the near term, I think it's driven more by use case.
Got it. We have a few more minutes left. I want to see if there are any questions from the audience. We have a couple of mics. Anyone?
Okay. Maybe just a quick comment here while we have some time, and then if there are no questions. I did want to draw folks' attention to the fact that cash flow in the business is improving. We talked about a significant increase in cash flow, and we're seeing that. We have actively worked down our debt. Technology and products and reinvestment in the business are high priorities, but the balance sheet is also a high priority. Our debt has come down from what was approaching $16 billion to below $12 billion at this point. We took out some debt through the course of this quarter. We expect to be net cash here in the near term.
I also want to point out that as far as it relates to capital return, we were authorized under the CHIPS definitive agreement to do a repurchase of $300 million, and we completed that in this quarter as well.
This quarter. Got it. Got it. Maybe a couple more questions. One for Scott. Scott, you mentioned Moore's Law and potential 3D structures as we look out to the next few years. How much runway do you think the DRAM industry has in terms of using 2D structures? When do you think the industry needs to consider going to 3D structures? As we go through this transition, do you think it's going to be somewhat similar to what we saw with the NAND industry, or is it going to be different? Maybe you can also comment on the impact on capital intensity as we go through that transition.
Yeah. There is a lot there. I think two things. DRAM is a very 3D-oriented structure to start with. We do not refer to it as Moore's Law on the DRAM side. Moore's Law is dead on the logic side. Scaling on the DRAM side is very much alive and has a really solid path. It is just a question of when we do certain things over the next decade, but there is no question that DRAM technology scales through an extended period of time and provides cost structure benefit as well as performance benefit. The only question is when we make changes in architecture, like true 3D or other architectural changes, to what we call planar, which is already very much a 3D type of DRAM.
In fact, lots of times in the press, some of the planar architectures that we would call planar are referred to as 3D also. There will be a number of transitions over the next several years, and most likely different companies will do them at different times. They'll largely be invisible, except in terms of the performance of the product. Ultimately, it will be done when they provide a certain cost benefit. Relative to my true 3D DRAM comment, which is a drastic change in the memory architecture, I think different companies may approach this in different ways, but our approach is going to be when it makes sense versus what we can do next with planar. When can we bring in a significant cost advantage?
You're not going to see a drastic bit increase per wafer like you did when we switched to 3D NAND. I mean, that was foundationally different in that it just totally changed the economics of how you built it. 3D DRAM, the way we see it, will provide the same kind of node-to-node cost reductions as the prior planar nodes. It will be focused on matching array performance while increasing speed and reducing power. I think those transitions will be more incremental, more in line with what we've done over the past several generations than anything like 3D NAND was. Actually, our 3D DRAM looks really good right now, and we will be in a position to implement it at the right time. It's just that right now is not the right time.
Got it. Mark, on capital intensity, should we anticipate any changes as we go from, I guess, 2D to 3D?
I don't know.
It's another piece. It is a more complicated flow. Our objective is to keep tooling more consistent than it was when we switched to 3D NAND. I think we can do that largely because the DRAM flow is already very three-dimensional and really complex. We will have much better utilization of tools and space on a 3D conversion than we did in NAND, where the 3D NAND conversion really was a tear-up of the whole fab and starting over. I think DRAM would be, again, more incremental, and we will bring it in at a time that is not as disruptive. I also think it will not come in all at once. We will have a planar path. Some of our products most likely will stay on planar, and we will bring in certain products on 3D that benefit from maybe a higher performance capability.
The capital transition also will be more muted in terms of we won't be changing out our entire supply base.
Makes sense. This is going to be my last question for both of you. You are at the forefront of enabling GenAI. I'm sure you're also implementing internally within your company. I just want to hear your thoughts on what you're doing at Micron as far as AI is concerned. What sort of productivity improvements are you seeing and general views on AI at the enterprise level?
I'll go quickly, because Mark has more to say on this one; he is our corporate champion on this. From a product development point of view, this is an area of intense focus for us and big opportunity in terms of the design space: how fast we can get new designs to market, how thoroughly we can verify those designs and make sure that they're right the first time, so we have more first-silicon success on those designs. We can shift our engineering resources to higher value tasks driving innovation, as opposed to some of the more incremental things. I think it's going to be a massive productivity improvement, in particular on the design side, but also on the fab process side. The technology is very complex, and the yield ramps are 100% dependent on how fast we can reduce variation.
That is one of the things that AI can really help us with, identifying sources of variation, finding the needle in the haystack that lets us reduce the process variation and get those yields up faster. From a memory development and manufacturing point of view, it is a massive impact.
Yeah. It is a really exciting time for Micron. Not only are we on a great trajectory for the business, with AI's impact to our business that we heard about today, it is also really exciting to be inside Micron. There is great enthusiasm for GenAI and broader AI within Micron. Fortunately, we stood up a smart manufacturing group which does a lot of AI work, and that has been in place since 2019. Work was done over a decade ago, even before that group was formally established, so there is a history of applying AI technology in the company. With the emergence of GenAI, it is a priority internally to rework our workflows. We believe, if we look around the tech space and do our benchmarking, we have some of the highest adoption rates for GenAI in our population of any company.
We are applying it in coding and seeing 30-40% productivity improvements there. We are measuring many, many other areas within the company. We have added it to our company compensation goals, with multiple targets related to GenAI adoption. We think this is a great technology for us to help manage the growth and complexity we have in our business. We believe, given our history of innovation and diverse culture, that we can very quickly adopt this and turn it into a competitive advantage for Micron.
Great. Thank you, Mark. Thank you, Scott. That's all the time we have. Thanks, everyone, for joining.