Great. Good morning, everybody. My name is John Vinh. I cover semis here at KeyBanc Capital Markets. We're pleased this morning to have Micron with us. We've got Sumit Sadana, EVP and Chief Business Officer, and we've got Satya Kumar, Corporate Vice President of Investor Relations and Treasury. I think Satya is going to kick us off with some disclosures.
Yeah, I just wanted to make sure you're aware that we may make some forward-looking statements. Our actual results could differ from what we say. I encourage you to look at our SEC filings to look at our latest disclosures.
In Tech Leadership Forum tradition, I think it's been five years in a row that you guys had a pre-announcement this morning, this time on the positive side, which is great. Maybe, Sumit, you could just walk us through the pre-announcement and talk about what the drivers of upside in the quarter were for you.
Definitely. Good morning, and thank you for having us, John. In terms of our guidance update this morning, we're really excited that Micron is performing really well, executing really well across the board. Our prior revenue guidance for fiscal Q4 was $10.7 billion at the midpoint of the guidance range. We updated that to $11.2 billion this morning, ± $100 million. Gross margin was 42% at the midpoint in the prior guidance; it's now 44.5%, ± half a percentage point. Non-GAAP EPS was $2.50 at the midpoint, and it's now $2.85, ± $0.07. If we look at the drivers: our volume shipments are largely consistent with the plan we had at the time of our prior guidance. This is primarily about pricing, and pricing has been strong across end markets.
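As a quick arithmetic sketch of the guidance update above (the midpoints and ranges are from the remarks; the code itself is just an illustration of the implied deltas):

```python
# Midpoints from the updated fiscal Q4 guidance quoted above.
# Ranges quoted: ± $100M revenue, ± 0.5pp gross margin, ± $0.07 EPS.
prior = {"revenue_b": 10.7, "gross_margin_pct": 42.0, "eps": 2.50}
updated = {"revenue_b": 11.2, "gross_margin_pct": 44.5, "eps": 2.85}

# Implied deltas at the midpoint.
deltas = {k: round(updated[k] - prior[k], 2) for k in prior}
print(deltas)  # {'revenue_b': 0.5, 'gross_margin_pct': 2.5, 'eps': 0.35}

# Implied gross profit at the midpoint (billions), prior vs. updated.
gp_prior = round(prior["revenue_b"] * prior["gross_margin_pct"] / 100, 2)
gp_updated = round(updated["revenue_b"] * updated["gross_margin_pct"] / 100, 2)
print(gp_prior, gp_updated)  # 4.49 4.98
```

In other words, the update adds roughly $0.5 billion of revenue and about half a billion dollars of implied gross profit at the midpoints, consistent with the pricing-driven explanation given below.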
We look at all of our different end markets around the world. The pricing trends have been robust, and we have had great success in pushing pricing up across those end markets. You're seeing the outcome of that in our reported results. Really strong performance. Our portfolio continues to do extremely well in terms of the overall mix improvement that we have been on a path to drive for the last few years. That's a really exciting part for all of us at Micron. You've already seen that on the process technology side, we have had really strong execution, with four nodes in a row in DRAM that were first to market, and several nodes on the NAND side as well. If you look at our product portfolio, the momentum there has been really stellar.
We have either very robust share or a trajectory of share gains in every single high-margin portion of our business. That's really underpinning a lot of our financial performance over the course of the last several quarters. We feel really well positioned going into fiscal 2026 and calendar 2026. We're excited about our future prospects looking out multiple years. I'll just pause there.
Great. Thanks, Sumit. Maybe just a couple of follow-ups here. One, you called out strength in some key end markets. I think we've observed some pull-in activity across certain end markets such as PC, smartphones, and traditional servers. What were the end markets where you saw strength? Do you think that some of this upside and strength that you're seeing was due to some of these tariff-related pull-ins?
Yeah, I mean, there may be some customers in certain end markets that are concerned about tariffs and taking some action on pull-ins. The important thing I want to emphasize is the plan we had at the time of the guidance we provided during our last earnings call: our shipments are largely consistent with that plan. As you can see in the gross margin improvement from guidance to our current update, this is very significantly driven by pricing improvements.
Okay. Any particular end markets you want to call out where you saw better than expected pricing?
The pricing has been on a positive trend across the end markets. The demand has been very strong on the AI and data center side. You're already seeing AI demand in all of the CapEx announcements that our hyperscale customers are making. For the top five companies that we track for CapEx growth, we expect just those five companies to be over $400 billion of CapEx spending in calendar year 2025. A large chunk of that is going toward infrastructure, servers, and data centers. The demand on the AI side has been very strong. You're also seeing several of these companies increasing their CapEx forecasts looking out several months and quarters. It's a strong trajectory.
That, of course, combined with the fact that HBM strength has been robust, along with the trade ratio we have spoken about for a while, approximately three HBM wafers to one DDR5 wafer, is creating a supply squeeze for the non-HBM portion of the market. The demand from all the other sectors of the market is fairly steady. That supply squeeze is helping create a very healthy environment for us to push pricing higher.
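The supply math behind that squeeze can be sketched quickly. This is a toy model, not Micron's actual wafer allocation: it just applies the roughly 3:1 trade ratio mentioned above to a hypothetical wafer split.

```python
# Illustrative toy model of the ~3:1 trade ratio quoted above: an HBM wafer
# yields roughly one third the bits of a DDR5 wafer, so shifting wafers into
# HBM shrinks total bit supply. All numbers here are hypothetical.
TRADE_RATIO = 3  # HBM wafers consumed per DDR5-wafer-equivalent of bits

def bit_supply(total_wafers, hbm_wafers, bits_per_ddr5_wafer=1.0):
    """Return (hbm_bits, non_hbm_bits) for a given wafer allocation."""
    hbm_bits = hbm_wafers * bits_per_ddr5_wafer / TRADE_RATIO
    non_hbm_bits = (total_wafers - hbm_wafers) * bits_per_ddr5_wafer
    return hbm_bits, non_hbm_bits

# Shift 15 of 100 wafers into HBM (a made-up allocation for illustration):
hbm, rest = bit_supply(total_wafers=100, hbm_wafers=15)
print(hbm, rest, hbm + rest)  # 5.0 85.0 90.0
```

The point of the sketch: those 15 wafers remove 15% of the non-HBM bit supply while adding back only 5% of baseline bits as HBM, so total bits fall 10%. With non-HBM demand fairly steady, that is the squeeze being described.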
Great. I do want to dive into your AI positioning in a minute, but I had one more follow-up on something I'm getting a lot of questions on. Obviously, we know there have been shortages in DDR4 and LPDDR4. To what extent did that also contribute to your strong results here in the near term?
Yeah, different players in the industry have announced the end of life of DDR4. We were one of the first to announce the end of life of DDR4 early this calendar year. We had told our customers that over the course of the next few quarters, we would be ending our shipments of DDR4. As other companies later announced their own plans to end-of-life DDR4, the DDR4 market went into a significant shortage. That has driven up pricing in that market quite substantially. I don't think that is really a driver of this upside for us in any material way, because DDR4 is a very small part of our revenue. LP4 and DDR4 together are a high single-digit percentage of our revenue, give or take. The DDR4 portion is only a low single-digit percentage of our revenue.
If I look at the contribution of DDR4 itself, it's a very small contribution to the overall upside. The upside is largely coming from the data center, PC, and mobile markets, broadly DDR5 and LP5. Those are the markets driving the upside from a pricing perspective.
Great. Maybe switching to AI, you obviously have an increasingly strong position within the AI infrastructure market. Obviously, you're doing extremely well with HBM. If you think about the Grace platform, your LPDDR5 is being used there. Maybe just help us understand what your vision and positioning is within the AI infrastructure market and how you see that trending for you over the next several years.
Sure. Just taking a step back in terms of AI, we are still quite early in the overall AI revolution that we see around us. We are still in the world of what you can think of as ANI, or artificial narrow intelligence. A few years from now, we will get to AGI, which is artificial general intelligence. Several years from there, we expect we will get to ASI, or artificial superintelligence. ASI can be thought of as all of the wisdom of humanity in a data center, or a country of geniuses in a data center, that sort of thing. It is a very revolutionary time for the next decade. We expect very significant transformative capabilities coming out of these data center deployments of artificial intelligence. Right now, all of this growth in AI is very data center-centric.
I have high confidence that these technologies will proliferate to the edge. All of the devices on the edge are going to get more intelligent with on-device intelligence. These devices won't just be portals connecting to a data center that has all of the intelligence capability; on-device processing is going to become important. It will start with the smartphone. As you see compelling user applications leveraging AI roll out over the next year, and then accelerate over the course of the next two to three years, you'll see some significant upgrades happening. Since most consumers keep their smartphones for a number of years, these platforms have to get to a hardware capability where they can run these software programs without struggling with the sophistication of the processing requirements on these platforms.
We are already seeing an escalation of average DRAM capacities in the newer platforms that the smartphone companies are designing, going up substantially from the 8 GB range into the 12 GB range. That is a massive increase in average capacity, which is going to create a further demand boost over the course of the next 24 months. That is all ahead of us, because so far the growth has largely been driven by the data center alone. We do see this AI trajectory continuing for the next decade. We see the growth in the data center proliferating to the edge. It'll go into smart cars. It'll definitely go into smartphones. It'll go into your eyeglasses, which will be another important device in the next three years. These device platforms will become more capable.
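As a back-of-the-envelope on the capacity step above (the 8 GB and 12 GB figures are from the remarks; the annual smartphone unit volume is my hypothetical assumption, purely for illustration):

```python
# Average smartphone DRAM capacity step quoted above: 8 GB -> 12 GB.
prior_gb, new_gb = 8, 12
growth = (new_gb - prior_gb) / prior_gb
print(f"{growth:.0%} more DRAM per phone")  # 50% more DRAM per phone

# Hypothetical illustration: assume ~1.2 billion smartphone units per year
# (an assumption, not a figure from the remarks). 1 EB = 1e9 GB.
units = 1.2e9
incremental_eb = units * (new_gb - prior_gb) / 1e9
print(incremental_eb)  # 4.8 -> exabytes/year of incremental DRAM demand
```

Even under rough assumptions, a 50% step in average capacity across a multi-billion-unit installed base is a multi-exabyte annual demand increment, which is the "demand boost" the remarks point to.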
They'll need more DRAM. They'll need new DRAM-based technologies that will get more sophisticated. We're seeing a lot of interesting R&D programs looking at requirements for these architectures. The data center growth, of course, will continue. We have tremendous visibility over the next several years into how our customers are planning to use future generations of HBM. I don't believe that in the last four decades of the DRAM industry we have ever had this kind of visibility into our customers' roadmaps, or this kind of deeply embedded R&D interaction with customers on these really sophisticated products that we co-design with their GPUs, because they're starting to move some of their GPU logic into the base die of the HBM. That's the custom HBM product. We are deeply engaged with our very large customers on that front.
We have great visibility into how they're thinking about the evolution of the HBM roadmap in the future. That's a really exciting part for us.
Maybe we could dive into HBM for a little bit. Can you talk about where you are in the development process and where you are in terms of qualification? And maybe talk about the implications of some of the architecture changes you referenced. HBM4, as you mentioned, is going to have a CMOS-based logic die that you and your peers are all fabricating, it sounds like, at TSMC. Does the introduction of that logic die level the playing field versus your competitors, or does it help you differentiate?
Yeah, lots of good questions in there. Let me start with the near term, then talk about 2026, and then the future. In terms of our near-term execution on HBM3E, it's going really well. Our HBM3E 12-high yield ramp has been dramatically faster than the 8-high HBM3E yield ramp a few quarters ago. We have already crossed over from a volume perspective, with 12-high volume now exceeding 8-high volume. That's a really important milestone for us, and we continue to execute really well on it. We've also been in discussions with our customers about CY26 HBM volumes. We have made significant progress with those customers over the course of the last couple of months, and based on that progress, we are confident that we'll be able to sell out our CY26 supply of HBM, which includes HBM3E, mostly 12-high, and HBM4.
We have already sampled HBM4 to our customers. They will start qualifying HBM4 when their own GPU or ASIC platforms power on and get to that qualification phase. It'll be a multi-month qualification for all of the HBM4, and HBM4 will ramp when that qualification is finished. It won't start in earnest until those newer customer platforms are ready to test; so far, they're going through just some initial testing, but more serious qualification will start then. Before I look past HBM4 to HBM4E, one point I want to highlight is that HBM4 for Micron is on the same technology node as HBM3E, which is 1-beta. Our 1-beta technology node is a very mature node performing really, really well, so HBM3E and HBM4 are on that same node.
For at least one of our competitors, HBM4 is on a 1C node, so there is additional work to be done there to prove out HBM4 on a new technology. We have the advantage that our HBM4 is on a mature 1-beta node. When we go to the next HBM product, HBM4E, that's where several customers are looking to customize. This customization takes the form of new logic in the base die. Just as a refresher: we have a base die on which we stack, for example, 12 DRAM die. This base die is largely a CMOS die with some interface logic for the layers of DRAM die above it, and we have the opportunity to put more logic into that base die.
In HBM4E, some of our customers want to put some of their own GPU logic inside the HBM base die and customize it for their needs, based on specific functionality they want to offload from the GPU into the base die, creating more of a custom HBM product that is unique to them and their specs. It works differently from the standard HBM product that would be used by a broader set of customers. I see this customization as a very positive thing for the industry, because outside of the couple of highest-volume players, most customers are only going to have the bandwidth to work with one or two suppliers on this customization, because it's a very expensive process.
A significant amount of R&D bandwidth has to be dedicated to it. For a customer, putting their IP into a supplier's HBM base die and then qualifying it is a very expensive process. It's not easy to do that with three different suppliers. Most companies, outside of one or two, will most likely work with only one or two HBM suppliers. For a good portion of the market, going from three suppliers down to two or even one obviously changes the landscape in a very dramatic way. It's a very important change. The deep, multi-year R&D partnership in which we co-design this HBM product is also super important and very useful. If you look at our capabilities on this front, we definitely believe, and have this input from our customers, that we have the best HBM product on the planet today.
Our HBM product has the highest performance and 30% lower power consumption than the next competitor's. Because of that, and because data centers are largely power constrained, this is a really important differentiation. On top of that, we have R&D teams in the U.S. that work super closely with our customers' R&D teams in the same time zone, often in the same town or city. The other aspect is that Micron is the only U.S. memory company. There is a tremendous desire for U.S.-based partnerships with U.S.-based manufacturing. We have made a $200 billion commitment to U.S.-based R&D and manufacturing: $150 billion of manufacturing and $50 billion of R&D over the next 20+ years. It's a really significant level of investment in the U.S.
We are the only memory company in the world that is going to be building front-end fabs in the U.S.; the first one, in Idaho, is already being built. All of these positives are very big for our customers and make Micron a key partner, one of those two or three companies they want to partner with. It positions us really well for the future.
Thank you. Are there any questions?
What are the features of the GPU logic embedded in there? What kind of apps are they looking for?
Can you repeat the question?
Yeah.
Yeah, the question is about what features customers are looking to embed in the base die. This is a little bit different for every customer that wants to do a base die. We are obviously trying to make sure that there is an adequate level of differentiation in the features embedded in our base die. When there isn't that much differentiation, we generally try to push those customers toward using the standard HBM4E product. For those who truly have something unique they want to do that our standard product cannot accomplish, we focus on what kind of IP they are trying to integrate, what they are trying to accomplish from a system-level perspective, and how that IP will be integrated into our base die in terms of design flow, timelines, schedules, and so on.
In terms of the specific functionality that these customers will want to put in the base die, I'm not able to comment on that because that's highly confidential and part of the differentiation that each customer is trying to drive in their GPU subsystem. It would be inappropriate for me to comment on what exactly that differentiation is. It is a little bit different for each customer.
Any other questions? One more follow-up on that, Sumit. I appreciate you walking us through how you're thinking about HBM4. When we get to HBM4E and move to custom interfaces, I assume you'd expect to be compensated with a premium for all the work that you're putting in with your customers. When we get to HBM4E, is your expectation that you could have differentiated pricing? Today you have better performance in some of your memory chips, but it is still a commodity market, so everyone gets the same pricing. When we get to HBM4E, could that be the end of commodity pricing in the memory market?
Yeah, I mean, when we look at our portfolio, we definitely have a big focus on creating substantial value for our customers, and in a lot of different products we have been able to create that differentiation. I don't think of it as the same pricing for all customers. There are lots of different things we are doing in our portfolio to create that value. For example, the LPDDR that we take into the data center. This is a special capability that Micron led the industry in: enabling low-power DRAM, which historically has only been used in laptops and smartphones, to be put into the data center.
To this day, Micron is the only company that has been shipping it in large volumes into the data center, and has been for the last nine months, because we have had that lead in the industry. Similarly, with QLC NAND, we have created products where QLC NAND can provide the same capability, from a specs perspective, as TLC NAND from our competitors. That has been a really strong capability. We also lead the industry in LPCAMM2, which is a CAMM form factor for LP memory in the data center. And we have shown this differentiation in HBM with reduced power consumption.
I think across the portfolio, we are driving for differentiation in different areas, which is what makes the current period such an exciting time for us in the memory business; we have not had an opportunity to create this much differentiation in our portfolio in the longest time. Coming to HBM4, which was your specific question, and to HBM4E with customization in the future: absolutely, we see HBM over time, across the cycle, as a higher-ROI product than the rest of the DRAM portfolio. DRAM is already a very strong, very robust ROI business across the cycle; HBM we expect to be even higher ROI than that.
HBM4E, because of this customization and the more direct, longer-term, deeper customer relationships, moves that portion of the DRAM business toward a very different, more ASIC-like business model, which is really a positive as well. That's why this transformation of the DRAM business model, which we are in the midst of and which will continue for the next five-plus years, is such an exciting time.
Great. Looks like we're out of time. Thank you very much.
Thank you.
Really appreciate it.
Thank you all.