Micron Technology, Inc. (MU)

Analyst Meeting 2015

Feb 13, 2015

During the course of this meeting, we may make projections or other forward-looking statements regarding future events or the future financial performance of the company and the industry. We wish to caution you that such statements are predictions and that actual events or results may differ materially. We refer you to the documents the company files on a consolidated basis from time to time with the Securities and Exchange Commission, specifically the company's most recent Form 10-K and Form 10-Q. These documents contain and identify important factors that could cause the actual results for the company on a consolidated basis to differ materially from those contained in our projections or forward-looking statements. These risk factors can be found in the Investor Relations section of Micron's website. Although we believe that the expectations reflected in the forward-looking statements are reasonable, we cannot guarantee future results, levels of activity, performance, or achievements. We are under no duty to update any of the forward-looking statements after the date of the presentation to conform these statements to actual results.

Good morning, and welcome to our 2015 Winter Analyst Conference. For those of you in the room, thank you very much for making the trek to balmy Arizona. I imagine for some of our Northeastern friends in the room it wasn't that hard a trip, actually, given the weather conditions we keep reading about. For those of you on the webcast, again, thank you for joining us. We are very excited to have you here today, because we think the message we're delivering about Micron's future is really one of excitement and opportunity. When you think about where we're coming from, we feel the company is positioned probably better than ever in the history of Micron, and what we want to talk to you about today is why we think that.
And the investments we're making, investments in technology, in market enablement and product development, and in future solutions, position us to compete like we never have before. Today's agenda is set up very much along those lines: we want to talk about our position in technology and our financial strength to invest. After my presentation this morning, Scott DeBoer, Vice President of Research and Development, will come up and talk about the technology roadmap at Micron, both short term and long term. Then each of our BU leaders will talk about their individual market opportunities, both in terms of enablement and the attractiveness of the investments we're making in their sectors, as well as the product development efforts we're driving with our customers. After the BUs speak, Brian Shirley, our Vice President of Memory Solutions, is going to come up and talk a little longer term about where memory, and advanced memory technology, will play in future end systems and end segments: the Internet of Things, big data, cloud, analytics, and emerging storage applications. That segment will be a little more forward-looking. And finally, Mark Durcan is going to come up and talk about the financial health of the company and how we see the opportunity for Micron going forward.

We're coming off a record year in 2014, in which we achieved over $16 billion in revenue and had our most profitable year, and you can look back and see how we arrived there. Micron has played the role of innovator, bringing technology to market, and of consolidator in the industry. So when we talk about a shift in our business, driving toward more of a solutions orientation, we have a history of doing that in DRAM and are now doing it in NAND, and our total business is well positioned for this going forward. We see memory market conditions as very favorable.
We talk a lot about the consolidated supply base and the competitors in the industry. What's important about that isn't just capacity; it's also scale, the ability to invest long term to take advantage of these opportunities. At $16 billion in revenue, the ability to invest at the right level to compete is really something that enables Micron to take advantage of the market, in a way you can imagine we could not have a few years back at a dramatically lower revenue base, given how the market has consolidated since. On supply growth, I'll talk to this on a follow-on slide, but suffice to say we think the investments being made in supply are simply more rational, and we think that will hold going forward for a variety of reasons; let me save that for the next slide. I think the most attractive part of the industry structure for us is the continued development of diversified end-market segments. No longer is the memory business limited by the behavior of any one market segment. As a matter of fact, I think our ability to navigate segments and optimize our business for each segment opportunity, from both a manufacturing and a go-to-market perspective, is one of the key criteria for our success going forward. So again, we continue to see memory market conditions as very favorable for us, and we look forward to investing accordingly.

I said I would get back to the supply-growth piece of the memory industry. In DRAM today, you can see that bit growth is well toward the lower end of historical levels, and we think that will continue. Why do we think that will continue? First and foremost, the complexity of newer process migrations.
It's interesting that the cost benefits of a migration, while still financially attractive at some level, are not nearly what they once were, and yet the complexity of achieving them is higher. So when we look at DRAM in the market today, we don't see any massive change in the supply structure, and we think we will continue to see a pretty balanced market in DRAM, which will allow for good returns. On the NAND side, supply growth is still relatively low historically, and this includes what we know publicly about people's capacity plans. We still think this will be measured against financial returns, because the end-market applications are attractive enough for us to generate strong returns. What's different between DRAM and NAND is that in NAND the demand picture is more elastic. I think you've heard us say on calls and at prior meetings that the elasticity in NAND is higher, so you may see more volatility. But we also think the end markets in NAND, whether storage or mobile, are continued growth drivers for NAND. It does have a lot to do with the migration of technology and the available supply, but we think NAND supply in the near term is relatively constrained: constrained because of the trade-off between the economics inside the factories of 3D conversions, and also because of the current landscape and who has access to the technology. So we see NAND as constrained over the near to long term, and when you have a period like this, we think the business will remain healthy for some time. Interestingly enough, on the DRAM side we continue to see strong demand, probably more so than we might have thought two years ago, in some interesting end markets, mobile phones for example. I don't think any of us thought two years ago that we would sit here and see mobile phone configurations of 3 gigabytes of DRAM per phone.
The same is true when we think about end markets in servers: when you talk to server manufacturers, they can't get enough bit growth. They're putting more DRAM in servers to optimize performance. So we're very pleased with the demand picture in our DRAM end markets, and again very excited about long-term growth and upside on the NAND solutions side of our business. It's interesting: we've talked and talked about solid state drives, both in the client and the enterprise, but the penetration today, given the cost trade-offs, is still low relative to the opportunity. So we see very favorable markets in terms of the supply-and-demand equation for the business, today and going forward.

What does 2015 bring for us? In one word, it's about execution. First and foremost, on the technology front, we have numerous technology and capability milestones we're trying to hit at the company, and we're excited about that. Without stealing Scott's thunder on the technology area, we feel we're in a fantastic position, both on the DRAM side, within the landscape of upgrading our factories and innovating with new technology, and in NAND, at the product level with the enablement of TLC and at the technology level with vertical NAND. Scott will update you on that shortly, but in the technology area this is a big year of execution for us: bringing these products to market, getting them into manufacturing, and getting them out to our customers. In terms of product enablement, taking this core memory and advancing our solutions capability is a key priority for our company. Not only in R&D but within the BUs, the engineering teams, and the adjacent packaging organizations and controller teams, our company is focused on how to add value to our core technology. And it's not just because that sounds great and represents higher-margin opportunities.
It's because the market is asking for it. Our customer relationships are shifting. They're shifting in many ways, but perhaps the most significant is this: across those diversified markets I mentioned earlier, the discussions we're having with our customers are about how we enable you, Mr. Customer, to develop an advantage with Micron technology. No longer is the conversation, "Hey, I woke up this morning and saw that DRAMeXchange posted a price of this; let's talk about what our price should be." It's, "How can your memory technology give me differentiation when I'm out competing in the marketplace?" That discussion requires us to think about how we enable those customers of ours to drive differentiation, and again, it's core memory plus the product enablement capabilities we're arming our business with. Another implication is that we need flexibility, because no one segment dictates or dominates our financial performance or how we go to market. We need to be able to react to demand and supply indications at an aggregate level: across fab, assembly and test, and the supply chain, we have to react to market demand signals and shift our capabilities to deliver where the demand is. The flexibility to do so has never been more critical, but the benefit is a diverse, very stable business platform going forward. We continue to invest to be flexible and adept at meeting our customers' needs across a whole variety of segments, and that means sizable investments in this capability. In full perspective, it's really the capability to take this core technology, to add value, and to develop solutions that are unique for a customer, solutions only Micron can deliver. That comes down to three categories of investment. First and foremost, emerging memory.
You're going to hear Scott talk about emerging memory; the idea is that memories will be developed for specific application areas. Think about storage and the role of performance, power, and reliability there: what customers are really looking for, memory that fits a specific application, is a much different climate than you might have seen in the memory business in the past. And then, what do we do with that memory? How do we integrate it? Whether through packaging or controllers or firmware, we're investing very heavily in adding resources and capabilities in the company to take that memory and do something unique with it, around a subsystem- or system-level capability, to drive enhanced differentiation and greater value. You might imagine that years ago, delivering quality to a PC company was our top priority; it was. But now, given the flexibility I commented on earlier and the importance of quality at a much higher level, think automotive companies or enterprise storage or networking companies, we're investing in the capabilities to deliver not only the best technology in the industry but world-class solutions at the highest level of quality. That flexibility, performance, and reliability allow us to be a world leader in differentiated memory solutions. On the right-hand side of this chart, you also see an investment in people. As our business has changed, we've continually looked for ways to strengthen our leadership team. I had a number of conversations last night because people noted, wow, there are a lot of new, unfamiliar faces from Micron here at the dinner.
And we will continue to invest as we bring in what we think are the best world-class executives to help us achieve our goals of shifting our business to differentiated, value-add capabilities, whether in storage or advanced compute architectures, or in assembly, manufacturing, and supply chain capabilities: areas where we need to upgrade, enhance, and build, with a great team to lead us to this new business opportunity that we think will make us the world's best memory company. We're very excited at Micron to have you here today, and we look forward to a great morning. With that, I'll hand it over to Scott DeBoer. We'll be back for Q&A, Mark and I, later.

Morning. Thanks, Mark. I'm going to start off today with a high-level view of a lot of the different technologies, and then I'll go into several of them in more detail and try to touch on some of the big themes for us. As Mark mentioned, this upcoming year is a huge, exciting year for us in terms of technology execution, and we think we're executing really well on several fronts. On the DRAM side, we continue to be very pleased with the technology execution post the acquisition, with some really great results. I'll go into some detail on our 20-nanometer technology rollout, both in Hiroshima and at Inotera; both are doing really well. In addition, we've now got what we call our 1x-nanometer technology moved from Boise over to be the focus in Hiroshima, so our development model is working as planned and letting us focus more on the future. On the NAND side, we've made good progress on our 16-nanometer TLC, which is our last planar node. I'll talk quite a bit about the decisions we've made and the benefits of the 16-nanometer TLC planar-to-vertical transition that we're going through, and we're very pleased with the progress on our 3D NAND.
Right now we've got a big focus this year on our 3D NAND rollout, on generally the same time frame we've been referring to for the past year or more, and our development-side focus is now also shifting to future generations and establishing a long-term roadmap on 3D. So good progress there. On the package technology front, it's really a year of continued maturing of the 3D packaging technology that's in our HMC products, which we're delivering to customers now. In addition, there's strong focus on the R&D package technology side on enabling the next generation, which is coupled with our 20-nanometer DRAM products for even better performance in the future. And then I'll give you some additional detail on the new-memory front; there's a lot of activity there, as Mark referred to. I think we continue to be uniquely positioned, with great partners, and we're really looking at some diversified applications in the future; I'll talk about where we're going with our new memory technology.

I want to start out today with a little bit of detail. Mark referred to the complexity of DRAM. We often refer to the complexity of moving to 20 nanometer and beyond and the impact that has on the business overall, and I wanted to try to quantify a little of what that looks like. On this particular graph, what we've done is compare against a node from seven or eight years ago. At that time, the transition, in this case from 50 nanometer to 30 nanometer, gave us a 100% bit increase per wafer. We look at the process complexity of that relative to today's world, where we look at what it takes to go to 20 nanometer. I picked 30 nanometer and 20 nanometer because they give you effectively the same kind of bit increase: in both cases, you get a 100% bit increase on a wafer.
If you look at the historical difference: many times we talk about increasing mask levels, and we quantify that here. You can certainly see the difference between the node of seven or eight years ago and today: where we had about 10% more masks then, we're now looking at about 35% more between these two particular nodes. But it's even more striking when you consider that, beyond the added masks, the process complexity between those masks, required to enable the node and to deliver the process control we need for the node to yield at high volume, is exploding as well: more than a 110% increase in steps between those mask levels. These two things lead, of course, to the effects we've talked about at a high level: it takes substantially more cleanroom space to get that 100% more bits today than it did seven or eight years ago, and the number of process steps scales with CapEx and with how many wafers you get out of a fab. That's the general message on quantifying what we mean when we talk about process complexity.

Now I want to shift over to technology execution. As I mentioned, we're very pleased with the way 20 nanometer has gone. We had a big endeavor, with the merger of Elpida and Micron, in putting the technology teams together and putting together a roadmap to execute on DRAM and put ourselves in a strong position for the future. On this particular graph, what I'm showing is effectively the yield ramp from the start of manufacturing through a period of time; it starts in the same place for all these nodes. As you can see, we set a pretty aggressive plan, the blue dotted line, on 20 nanometer. We had good confidence in these teams coming together, and we set a very aggressive plan relative to what we'd historically done on 30 nanometer and on 25 nanometer.
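As a back-of-the-envelope illustration of the complexity comparison above: the baseline mask and step counts below are hypothetical placeholders, not Micron's actual figures; only the quoted deltas (roughly 35% more masks and a more-than-110% increase in steps between mask levels) come from the presentation.

```python
# Hypothetical baseline for the older (~50nm -> 30nm) node; not Micron data.
OLD_MASKS = 40            # assumed mask count
OLD_STEPS_PER_MASK = 10   # assumed process steps between mask levels

# Deltas quoted in the talk for the 20nm-class node.
new_masks = OLD_MASKS * 1.35                     # ~35% more mask levels
new_steps_per_mask = OLD_STEPS_PER_MASK * 2.10   # >110% more steps per mask level

old_total_steps = OLD_MASKS * OLD_STEPS_PER_MASK
new_total_steps = new_masks * new_steps_per_mask

# The two increases compound: total step count nearly triples for the
# same 100% bit-per-wafer gain.
print(f"old node: {old_total_steps:.0f} steps")
print(f"new node: {new_total_steps:.0f} steps")
print(f"complexity growth: {new_total_steps / old_total_steps:.2f}x")
```

Whatever the real baseline counts are, the multiplicative point holds: the two quoted increases compound to roughly 2.8x the total step count for the same bit gain, which is why cleanroom space and CapEx per bit rise even though the bit increase per node is unchanged.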
And to date, we've exceeded that plan considerably. So when we say we're very pleased with what's going on with 20 nanometer in Hiroshima and at Inotera, this is the kind of result we look at that drives our enthusiasm for the team's execution. Continuing on, as I mentioned, we've put 1x nanometer into Hiroshima and have good focus on that, and we're gaining confidence on additional nodes beyond 1x and 1y. I think we're getting a lot more confidence in the ability to scale DRAM farther out into the future.

On the 3D NAND side, a couple of points. We view our execution on 3D NAND as hitting right at the plan we've had. Clearly, there's lots of discussion about how far you can extend planar, and I'll talk a little about the performance trade-offs you have when you scale planar NAND. But I wanted to look at what we see as the change in bits per wafer and in cost structure over time between planar and 3D NAND. A couple of things here, and hopefully out of this you'll see why we feel confident that the transition from 16-nanometer TLC to 3D NAND was the right time for us to shift all of our resources and put them firmly behind the 3D NAND direction. When we look at bits per wafer, you can see where a 32-tier MLC part falls relative to a scaled planar node: there's some advantage, but it's not a huge difference in bits per wafer. And we've talked before about the cost and wafer trade-off, how many wafers you get out of a given fab for vertical relative to TLC. But what you can see here is that it's really a story of where this goes over time: it starts diverging, with significantly more bits per wafer over time, as you go to 32-tier TLC and beyond.
Once you go beyond that, you start getting real separation in bits per wafer. On the cost side, it takes a little longer to catch up, because of the challenges we've talked about with the space and the capital involved. But ultimately, we view the cost direction here as more of a return to the historical cost-per-bit scaling we had on planar over the past 10 years. So the cost story on 3D NAND continues to build and continues to get better.

Now I want to talk a little about the performance side. Without going into great technical detail, the metric here is electrons per cell. Fundamentally, you can think of that as the core capability of the memory that the other capabilities, endurance and so forth, are built around; first off, you have to have a certain number of electrons per cell. The thing to note through time on planar NAND is that the raw performance of the NAND has continued to degrade at a fundamental level. We've managed that through controllers and error management, but it keeps getting more and more complex from a performance-management point of view. The big disruption with 3D NAND is that it steps back to something between a 50- and a 70-nanometer-class technology for raw NAND performance. This is a big advantage for 3D NAND. The other thing to note, which translates into long-term performance, is that as you scale by going to more and more tiers of vertical NAND, the raw capability of the cell isn't degrading any longer the way planar did. So you have a different kind of battle here.
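The bits-per-wafer and cost trajectory described above can be sketched with a toy model. The tier counts are product points from the talk, but the density and wafer-cost scaling coefficients below are invented for illustration, chosen only to reproduce the qualitative shape: rough parity at 32 tiers, then separation beyond.

```python
# Toy planar-vs-vertical model. Normalize 16nm planar TLC to 1.0 bits/wafer
# and 1.0 wafer cost; all coefficients are assumptions, not Micron data.

PLANAR_BITS = 1.0
PLANAR_WAFER_COST = 1.0

def relative_cost_per_bit(tiers):
    bits = 1.10 * tiers / 32        # assume a ~10% bit advantage at 32 tiers,
                                    # then roughly linear growth with tier count
    wafer_cost = 1.0 + tiers / 320  # assume wafer cost rises with stack height
    return (wafer_cost / bits) / (PLANAR_WAFER_COST / PLANAR_BITS)

for tiers in (32, 48, 64):
    print(f"{tiers}-tier: {relative_cost_per_bit(tiers):.2f}x planar cost/bit")
```

Under these assumptions, 32-tier lands at parity while 48- and 64-tier drop to roughly 0.70x and 0.55x of planar cost per bit, matching the "separation over time" shape of the chart: the first vertical node buys little on cost, but each added tier generation compounds the advantage.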
So we view this as a real long-term performance benefit for 3D NAND. Next, I'm going to shift over to future memory technology and frame up the way we look at things. On the roadmap-enablement piece, we look at our DRAM roadmap, our NAND roadmap, and then what else is out there. On DRAM, as I mentioned, we're more confident in multiple nodes of scaling right now than we have been over the past couple of years. At the same time, we still see an opportunity for intersection at some point in the future with a type of memory that can extend the high-performance end of the memory spectrum, the role DRAM holds, with new memory technology that fills that space. So part of our new-memory focus is clearly around DRAM-performance-type enablement. On the other end of the spectrum, on the NAND side, we really don't see a technology out there that's fundamentally capable right now of being a NAND replacement. Part of that is our confidence level in 3D NAND: we see it extending for a long time on a very good cost-improvement basis, with really strong performance on the pure storage side. So we think that's covered. Where we see opportunity is in the middle, in what we call storage-class memory. Brian is going to talk more about a lot of the applications that help drive this; many of them are very customer-specific, and we think this is a real strength of Micron. We have good partners, good customers who want to work with us to enable a type of memory with a performance and cost position in between NAND and DRAM: better performance than NAND, but better cost than DRAM.
And we think there's a lot of opportunity there, for things all the way from mobile to the data center. Those kinds of applications require close partnering with key customers, and we think we're in good position on that. On the base technologies to enable some of these storage-class applications, we're focused on things like spin-torque memory and resistive RAM, and others we're not talking about today. Around the middle of last year, we announced a 16-gigabit ReRAM, which is a kind of storage-class memory. These are nice milestones on the way to having something commercially viable, and we think we're headed in the right direction.

Last, I want to finish up with a little picture of what we're calling our innovation roadmap. Looking at this, I think the notable thing is really the overlap of big technology execution programs in 2015; Mark referred to that earlier, and I'm sure it will get brought up a lot today. There's a whole bunch of critical technology milestones, and I think we're really well placed coming across them: on 20-nanometer DRAM, on 3D NAND, the rollout of at least the development effort on the next generation of Hybrid Memory Cube, and then some things we'll be talking about on new memory, probably in a lot more detail, starting sometime in the second half of this year. So we're feeling very bullish about our technology efforts and what we've accomplished over the last year, and we think things are looking pretty bright. And that's where I'm going to finish.

Hi, Scott. I'm Srini from Summit Research. The question is, what node will the 3D NAND land at? Will it be 40 nanometers or 15 nanometers or 30 nanometers?

Sure. Our technology node on 3D NAND is a little bit complicated to talk about.
For example, some of our competitors talk about a technology node in terms of the thickness of a film that's in their stack, which ultimately is a feature of the memory but really isn't very relevant. It doesn't change the bit density whether you're at 30 nanometers or at 20 or at 40; the bit density is still the same. Ultimately, what we look at is not a technology node. We have litho levels on our 3D NAND that are immersion, probably similar to what some of our competitors do. We look more at optimizing bits per area, and we will be very competitive on bits per area with anything that's in the market today. So 30 nanometers is a thickness, and I don't want to talk today about what our film thickness is going to be, but I will say it doesn't mean as much as it's being played up to mean.

[Inaudible follow-up question.]

That one's tough for me to answer too. It does; our technology will have a similar number of electrons to what other competitors' technologies do. And we'll be able to answer that exact question a lot better for you over the next couple of months, when we're a little more public about what our technology is.

Scott, over here; sorry, I can't see. You showed the slide about the technology map, especially on DRAM, from 20 nanometer all the way down to 1z. And earlier you also had a slide talking about the increased number of steps per node and how the cleanroom space changes. Can we take some of that information and overlay it as you go from 20 to 1x and 1y? I'm just trying to better understand how complexity changes and how much cleanroom you have to give up.

Well, the trend on DRAM for increasing complexity absolutely continues. A lot of that is driven, I think probably as you know, by the number of multi-patterning levels you have to put into the process flow, which does continue to increase as you go to 1x and 1y. So I think the trend is similar for the next node relative to what I showed you earlier; it's another pretty substantial step to get another 100% bit increase. To get that complexity down, if you go to something like EUV, certainly we'd be looking at something like that over time to help, but it doesn't completely solve it either. So part of what Mark alluded to earlier is that we just have to be very cognizant of the trade-offs we're making when we look at a new technology. We're very aware of how much complexity we're adding versus how much benefit we're getting, and of course we do the node only when we get that benefit, but it's more complicated than it used to be just to do a scaling node and have it make sense.

So the cost curve is going to show, you mean, the cost reduction per node?

Yeah. It's absolutely challenged. You wouldn't do a node if you didn't get a substantial cost improvement, but it's more challenging to get the benefit of a given node. Without a graph it's hard to talk to exactly what it is, but it's absolutely challenging to look at the complexity we're bringing in and make sure you've aligned everything so that you actually get a cost reduction with the next node. Performance is sometimes a different thing, but on the cost side, it is more challenging now than it was 10 years ago to know that we should do another node.

I'm sorry, I didn't catch that. Scott, can you repeat the question?

It seems like the process recipe for 1x and 1y may not be finalized, and that's why you don't have that slide, to be able to better assess the increased complexity.
We're still working on it; well, that's historically always been true. What we do is project: you never know the exact process recipe three years out on a given node, so that's absolutely true. But we have pretty good projections for what the raw process-time changes are on given process steps that we know are going to be in the flow, and we know how much that's going to increase. So we actually have pretty good projections for how much complexity is going to come in on a 1y or 1z node right now.

But you're not going to share them?

And I'm not sharing them yet.

Harlan Sur with JPMorgan. Thanks, Scott, for the presentation. I've got two questions. Number one: when you talk about the 3D introduction this year, it's 32-layer. Are you planning to roll out 2 bits per cell or 3 bits per cell? Where's the focus going to be this year?

Sure. Our focus is absolutely on 3 bits per cell, but we will be rolling out some products that are better suited for MLC, depending on the exact performance and density need. So we'll have MLC products, but absolutely, we are very focused on TLC coming out really early.

Great. And then, obviously, you and your peers talk about the complexities associated with 3D NAND, and we hear about yield issues. At the end of the day, it boils down to manufacturability, and it boils down to whether you can continue to drive cost-per-bit scaling. So the question I have for you, since it sounds like your 3D NAND process is already finalized and you're working on manufacturability: help us understand cost per bit of 3D NAND TLC relative to your 16-nanometer planar TLC products. Are you at cost-per-bit parity yet or not?

We are going to be at cost-per-bit parity later this year, most likely. We're still in an early phase, as Mark and others have said, and I think we had that on the slide.
The real ramp in Singapore is going to start this summer, and we'll be ramping it through the year. When you're first starting up, it's actually a little bit hard to get parity until you have enough wafers running in the fab to have a real cost basis for something. But our 32-tier TLC absolutely will be at a better cost structure than our 16 nanometer TLC, and this graph looks at, at the right yield levels, the difference between what our 16 nanometer TLC would be and what our 32-tier is. Hi, thanks. This is on the same chart. If you look, it seems like 32-tier TLC is not giving you much cost advantage over your 16 nanometer TLC. So does that mean that once you do that node transition, you don't get much first-year cost decline as you move from planar 16 nanometer to 32-tier TLC? I'm not sure I caught the question there. If you draw the line, 32-tier TLC seems like a very similar cost to your 16 nanometer TLC, right? It's a little bit lower, but yes. That was a comment I made earlier: you get some cost advantage on the 32-tier relative to 16, but it's really a strategic direction, where you know you're headed down a path where the cost advantage continues to get better and better over time. So does that mean you will just do some 32-tier and then move to 48-tier fast, so that you get the cost benefit? We will move... it's a combination of getting the cost structure on the right path as well as enabling some key products, which Darren and Brian and others will be talking about today, with the performance of 3D NAND. So there are two benefits. Hey, Scott. You showed a chart showing that 20 nanometer yields were progressing faster than prior-generation nodes.
That's a little counterintuitive, given all the discussion about increased complexity. How should we think of mature yields at 20 nanometer compared to previous technologies, just given the complexity? Just help us understand why 20 nanometer can be better than prior nodes across a number of these metrics. Sure. Well, I can tell you from a Micron point of view. We do see mature yields at a very similar... I want to say better, but in the first case, I think it's at least similar to what we've seen historically. As Mark talked about, through the investments we're making, through the Elpida acquisition and moving forward, we've invested a significant amount of resource, which neither of the two companies had by themselves, in terms of process capability and ultimately yield capability, just with the bandwidth we're applying to this right now. So, absolutely, if we'd had that same bandwidth through history, then 20 nanometer for us would be challenged relative to some of those other nodes. But really, we went through a reset in how we do DRAM development and the kind of focus we're putting on it over the last couple of years. So we're pretty bullish on where 20 goes relative to anything we've done. Hey, Scott. My question is on 3D NAND. When you look at 3D NAND, it's a pretty high fixed cost, right? Your output from the fab goes down to 50%, or around there, while you have some increase in bit growth. When you talk about cost parity to 16 nanometer, what kind of wafer output are you talking about to get to that cost parity? The 32-tier versus 16 nanometer? Well, one of the points on this one is that the bits per wafer on this 3D NAND node are actually substantially higher than 16. You just don't get all of that passing straight down to cost.
So you're going to have to be at, you know, 3,000 or 4,000 wafers a week before you start seeing what your actual costs on 3D NAND are, and then the cost parity, or the cost comparison, is more valid. When you're running small numbers of wafers, the comparison is not very valid. But as soon as we have a stable line ramping up through the second part of this year, it won't be too long until we hit good costs. Can you talk about the capital spending ramifications of this? I think you're spending less than a billion dollars on NAND this year. How much does that buy you? And then what are the economics of adding 3D capacity versus converting planar capacity to 3D? I'll probably let Mark cover a few of those at the end of the day. I know he's going to have some comments relative to that capacity overall. We are spending a lot of time looking at, as we've talked about, adding some white space. We have some white space in Singapore now, and then we'll take a careful view of when it makes sense to convert planar over to 3D NAND. But I think I'll punt that one to Mark a little bit and let him talk about it. Yeah, hi. Carl Ackman from Cowen and Company. Relative to 3D NAND, I think Mark was saying that 3D does not necessarily add to bit growth absent wafer capacity adds, but I think everybody's going to add wafers. So I guess the question is: how much 2D do you think will ultimately be converted to 3D? Well, I can't speak for our competitors' strategy. I can say, from our point of view, that we are going to convert some of our planar NAND over to 3D if we look at a site like Singapore.
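The volume dependence described above, where a cost comparison is "not very valid" until the line ramps to a few thousand wafers a week, can be illustrated with a toy fixed-plus-variable cost model. Every figure below is an invented assumption for illustration, not Micron data:

```python
# Toy model: at low wafer starts, fixed fab costs dominate cost per bit,
# so a new 3D NAND line looks expensive even with more bits per wafer.
# All numbers are illustrative assumptions, not Micron data.
def cost_per_bit(weekly_fixed, variable_per_wafer, wafers_per_week, bits_per_wafer):
    wafer_cost = weekly_fixed / wafers_per_week + variable_per_wafer
    return wafer_cost / bits_per_wafer

# Mature planar line at full volume (normalized bits per wafer = 1.0).
planar = cost_per_bit(weekly_fixed=2e6, variable_per_wafer=1200,
                      wafers_per_week=20000, bits_per_wafer=1.0)

# Assumed 3D line: ~1.6x bits per wafer, but ramping from low volume.
for wpw in (500, 2000, 4000):
    c3d = cost_per_bit(weekly_fixed=2e6, variable_per_wafer=1500,
                       wafers_per_week=wpw, bits_per_wafer=1.6)
    print(f"{wpw:>5} wafers/week: 3D/planar cost-per-bit ratio = {c3d / planar:.2f}")
```

With these assumed figures, the ratio falls below 1.0 only around 4,000 wafers per week, mirroring the volume threshold quoted in the answer.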
But at the same time, we have other important businesses that depend on high-performance planar NAND nodes that we're going to support for a significant period of time into the future. Jeff will talk about some of the embedded opportunities there that are good business for us and will continue to drive planar wafers. So we're not going to convert all of our planar wafers to 3D NAND, but we're absolutely going to convert some of them. Hey, a question on EUV. Can you talk about where it is on your roadmap today, how many layers you would expect to use initially, and how you think about cost of ownership there in terms of wider adoption? Sure. Right now, there's been pretty good progress on EUV over the past year. Again, it's still not quite at the position where it makes sense for Micron. If you look at fundamentally where it comes in, most likely you'd bring it in for only a couple of layers to start with. But ultimately it could replace 10 or 15 of the levels if it were at a much higher throughput and reliability point than it is today. One of the things that is sometimes lost when we look at EUV is that by the time it comes in at that throughput point, it's going to be multiple patterning too. So EUV is not coming in and letting us just do a single print in very many cases. Once it comes in, we're still going to be doing multiple patterning, maybe just pitch doubling instead of pitch quadrupling. So it's not coming in as an absolute cost savior. It's complicated to bring in, and it's also complicated when you've already got a fab set up to run double patterning and quadruple patterning with equipment that's been in there for a period of time. So you've got a pretty high bar on something like DRAM or NAND where the fab is already built. One more question, Scott. Okay. Great. Thanks for taking my questions.
Scott, Doug Freedman, RBC. When I look at your charts, the one you're showing up there now shows a planar roadmap with 16 going down to 1z and then a 1z prime. But if I compare that to your innovation roadmap, Micron's own roadmap stops at 16 planar. Am I reading that correctly? Is there no more... You are reading it correctly, Doug. The point on this is that our projections are the line. I'm not saying what we're doing on this particular graph; I'm saying what our technology cost and bits would be if we did any of these nodes. It's a projection graph. For Micron specifically, we're stopping at 16 nanometer TLC, which is before the last two on there; those are what we could conceive of somebody else deciding they might want to do. Okay, great. So you believe the industry could have another planar step beyond the 16, going to 15 and beyond? Yeah, absolutely. The industry can absolutely continue to scale planar NAND. If you look at the reason why we've chosen not to, it's more about this one: it's about performance. If you're fundamentally looking at a technology that has hundreds of electrons to deal with when you try to make a high-quality TLC part in the future, versus one where, per state, you're trying to keep track of somewhere between 6 and 10 electrons to make the planar NAND work... you can make it work, but you have to deal with different kinds of problems, and the quality levels are the challenge. There are some applications it may be fine for. Okay. Alright, I think I'm done. Thank you. Good morning. I thought I'd start off with a very quick refresher on what CNBU is.
We have responsibility for selling the Micron portfolio into PC, which includes graphics, into servers, both the enterprise and the cloud side, as well as into networking, all with the exception of storage, which is handled by our storage business unit. In talking about the opportunities we're looking at, there are really three basic messages I'd ask you to take away. The first is around the sources of opportunity for growth in our business going forward, and the fact that there is an ongoing and continued shift away from PCs, which have been the driver both for the company and for the compute business unit for quite a while, towards incredible growth driven in the enterprise and the data center. The second is the ongoing, continued growth in the richness of mix of the products we are selling. With the exception of networking, which has had a very complex product mix for some time, if you go back a few years these segments could essentially all be served with the same commodity part. Now graphics is moving towards a very high bandwidth specialized graphics technology, servers are transitioning over to DDR4, and of course I'll talk about some of the unique technologies we're developing, like RLDRAM and HMC. So again, it's a much richer mix of products, which we think makes for a little bit stickier business, but it is also one of the drivers for the operational flexibility that Mark talked about at the beginning, and we just have to get better at servicing that more complex mix. The third thing I'll talk about is really building on what Scott has just covered: the technology evolution, completing our transition to 25 nanometer this year, and enabling and getting to the ramp of 20 nanometer. But just as exciting is leveraging some of the packaging technologies for unique value-added solutions that we think can provide a better return for the capacity and for the investments we are making.
So let's quantify a little bit of what I was talking about with regard to the shift in the drivers of growth. I think it's been fairly well understood, the relative growth between PC and mobile: the fact that the market opportunity is larger in mobile this year than it is in PC, and that gap is going to continue to grow. Mike Rayfield will talk to you after the break about some of the incredible opportunities in the mobile market. In the compute market, the growth is being driven in the enterprise and the cloud. I'll talk in a bit more detail about what some of those specific drivers are, but it's driving bit growth rates in the 40s on the enterprise side and in the 50s on the cloud side. These are compound growth rates out through 2018, to the point where in 2017 the combination of enterprise and cloud will pass PC, and by the end of the decade each of those segments individually will be larger than our opportunity in the PC space. Networking, obviously, is a much more modest size, but it's much more diversified. It also demands longevity, which makes it much stickier and a good value opportunity for us. And the growth of video traffic generally drives a very healthy growth rate there. And while PC is modest in growth, high single digits, we've got to keep in mind it's larger than the rest of our bit opportunities combined this year and will still be the largest segment for us this year and next, so it's an important one to keep an eye on. A proprietary interface, we think, provides a very good value opportunity. So now let me shift and talk in a little more detail about each of these markets. I'm actually going to go from the lowest bit growth opportunities to the highest, starting with graphics. I mentioned that there is a fairly modest bit growth opportunity, while there is demand coming from 4K and ultra-high-definition, as well as the need for ever more realistic gaming.
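The crossover described above, where enterprise plus cloud passes PC around 2017, can be sketched with compound growth. The starting segment sizes below are arbitrary units chosen only so the quoted growth rates produce the stated crossover year; none of them are reported figures:

```python
# Illustrative compounding sketch of the enterprise+cloud vs. PC crossover.
# Starting sizes (arbitrary units) and rates are assumptions, not Micron data.
pc, ent, cloud = 100.0, 30.0, 20.0       # assumed 2014 bit opportunity
pc_g, ent_g, cloud_g = 0.08, 0.45, 0.55  # "high single digits", 40s, 50s

for year in range(2015, 2019):
    pc *= 1 + pc_g
    ent *= 1 + ent_g
    cloud *= 1 + cloud_g
    marker = "  <- enterprise+cloud passes PC" if ent + cloud > pc else ""
    print(f"{year}: PC={pc:6.1f}  ent+cloud={ent + cloud:6.1f}{marker}")
```

With these assumed inputs, the combined segment overtakes PC in 2017, matching the projection in the talk; the point of the sketch is simply how quickly 40 to 50 percent compound growth overtakes a high-single-digit grower.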
There's also a lot of pressure coming from the ongoing switch towards mobile platforms for people to play games on, and that's what keeps the bit growth modest. Where we're interested is in this drive towards higher resolution, more realistic gaming environments, which is driving the interface to 8 gigabits per second, the fastest interface of any traditional DRAM component in the marketplace today. At those kinds of speeds, it requires intimate cooperation with the chipset partner that is developing the chipsets as well as with the customer that's developing the boards. That makes it quite sticky, and therefore we think it's a good margin opportunity for us. From a technology perspective, graphics also happens to be fairly friendly to a new technology; it's relatively easier to ramp than markets like server. So it will actually be the second place that we ramp 20 nanometer later this year, after PC. And through continued focus on driving this bandwidth even higher, we see a chance to roughly double it again, to about 15 gigabits per second going forward, which will certainly have opportunity in graphics and may have opportunities in other market segments as well. Okay. I did talk a bit about how the growth opportunity is really moving away from PC. But again, keep in mind it is our largest bit opportunity within the compute BU this year and next, and so it's obviously very important. There is a degree of product portfolio diversification ongoing here as well. Certainly over time, probably starting next year, we'll see mainstream desktops kick over to DDR4, but we're also seeing, driven by battery life and power, a shift toward LP technology, initially LPDDR3, moving to LPDDR4, in the thin-and-light segment of notebooks, again contributing to that richer product mix that we need to be managing.
Also, in those thin-and-lights, typically that LP component is soldered down on the board, and again, that makes for a tighter, a little stickier relationship in terms of how we supply. From a technology perspective, this is the first place that we will be enabling and ramping 20 nanometer. As Scott said, we're very pleased with where that's going, and it is a PC product, a 4 gigabit DDR3, that's in the lead there. We're going to start production of that next quarter, and you'll see in the latter half of the year this start to become a significant contributor to our business. Networking is the second largest growth opportunity for us, and the growth here is being driven, in a word, by video: whether that is user-generated video like YouTube, or the increasing growth of over-the-top delivery of traditional content, from Hulu, from Netflix, even traditional networks like ESPN from the likes of DISH. That is driving a very significant growth opportunity in the infrastructure, as well as in LTE wireless deployment to deliver all of that bandwidth to the users. From an innovation point of view, this is an area where we've been doing segment-specific innovation for some time. The segment is very focused on low-latency products, and so we've had a family of reduced-latency DRAM products, the latest being the so-called RLDRAM 3 product, which is being very well received, a very good high-value product because of its low latency. But this is also one of the areas where we are leveraging not just the raw technology evolution that Scott talked about, but some of the packaging and 3D technology as well.
As some of you may be aware, we have for some time been developing a technology called the Hybrid Memory Cube, which takes basic DRAM technology and interconnects it with something called through-silicon vias, which allow power and performance across chips similar to what you traditionally could do on-chip. A stack of DRAM is connected through the TSVs, and there you can manage error correction, repair, and a very high speed serial interface. It allows for higher density, lower power, and lower latency at an overall solution level than you can achieve with standalone components. One of the areas that is very receptive to this high bandwidth and low latency is the networking space, and as we begin shipping qualified product later this year, it will start to become a contributor later this year and next year, again in a very high-value segment because of the value that the networking space places on this. The last comment I'll make here before I move on is that this is probably the one area, more than others, where longevity is a value appreciated by the customers. So we take advantage of some of the work that Jeff Bader does in his embedded group, focused on automotive and industrial markets, with things like a product longevity program, because these customers sell to operators that really don't want to have to do rapid forklift upgrades, and so we can get a good premium as well from our willingness to provide technology on an extended basis. Okay. So I mentioned the shift in the growth drivers going over to servers, into enterprise and cloud. It's really coming from a number of factors, but one of the most significant is the growth of what we refer to as high-velocity applications.
We think about that both in terms of so-called structured database applications, think of things like SAP's HANA in-memory database or Oracle's TimesTen, where the performance demands on those traditional databases are getting to a point where they can't afford to be paging off of a disk or SSDs. They need to be operating out of memory, and that is driving demand for basically as much memory as you can possibly pack into a single box. The other is graph analytics, which might be a social network trying to map the connections between their users to generate the best advertising, or it might be an insurance company that has to come up with a real-time insurance quote, again using graph analytics to build a risk profile so they can do that effectively, and again driving tremendous growth in the density of DRAM in a box. So if you look at the server business across both of those segments, the unit volume of servers is actually not growing that fast; we anticipate it's in the single-digit range. But the content per box is growing in the thirties, perhaps even pushing the low forties. And that's what's driving the growth up to the mid-40s in enterprise and up to the mid-50s on the cloud side of things. So how are we responding to that? Certainly, at a very basic level, we're managing the transition to DDR4, which is going to occur initially in the enterprise space; the cloud players generally migrate a little bit slower. That does provide power benefits to the user, and from our perspective, that transition actually takes a little bit of bit capacity out of the market, because we do lose 10%, maybe a little bit more, of bits as we go from DDR3 to DDR4. And of course, from a technology perspective, we're leveraging the advances there.
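The decomposition just described, server unit growth compounded with content-per-box growth, can be sketched numerically. The specific rates below are assumptions chosen to land inside the quoted ranges, not reported figures:

```python
# Illustrative sketch: server bit growth is roughly unit growth compounded
# with content-per-box growth. Example rates are assumptions matching the
# ranges quoted in the talk, not Micron data.
unit_growth = 0.05       # server unit growth "in the single-digit range"
content_growth = 0.38    # content per box growing "in the thirties, pushing low forties"

bit_growth = (1 + unit_growth) * (1 + content_growth) - 1
print(f"implied bit growth: {bit_growth:.1%}")  # ~45%, i.e. mid-40s
```

The compounding is the point: modest unit growth times thirties-percent content growth lands in the mid-40s bit growth quoted for enterprise.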
We're completing our transition to 25 nanometer, and we're in the process of enabling 20 nanometer in this space, something that will be happening, from a ramp perspective, towards the end of the year. But a little more exciting, in terms of some of the specific opportunities for higher density: I talked about this interconnect technology called the through-silicon via, which allows very high performance connections across die. That is one of the ways we will satisfy this thirst for ever-increasing density per server box while still maintaining the increase in performance. You're seeing server platforms now moving to two gigahertz and beyond, and to get to that performance level with a stacked solution, conventional approaches won't get you there. So we're looking at how we move from a mainstream 16 gigabyte server DIMM today, with TSV technology, up to 64 and even 128 gigabytes per DIMM, again satisfying that nearly insatiable demand for density in server boxes. And then the other one: similar to networking, the piece of this marketplace in the compute world that needs extremely high bandwidth and extremely low latency, managed in a power-efficient way, is the high performance computing space. This is the other segment where we're seeing early uptake of the Hybrid Memory Cube. Again, we'll be shipping qualified components later in this fiscal year and beginning that ramp, and we'll see that, towards the end of this fiscal year and into fiscal 2016, become a contributor of high-value revenue for us in this space.
And of course, going forward, as Scott mentioned, we are very much focused on the next generation of the Hybrid Memory Cube, which doubles the density and doubles the bandwidth. Again, in both the high performance computing and networking spaces, we think it will allow us to continue to grow very well. So if I could summarize where we are focused in taking advantage of these opportunities: certainly execution, just basic blocking and tackling, is very important. That means working closely with Scott's team and making sure, from an execution perspective, that we finish up the 25 nanometer conversion, certainly in the server space, and enable and begin the ramp of 20 nanometer. It also means staying very focused on the operational flexibility that Mark talked about at the opening, because this complexity of mix demands that we have very good agility to shuttle capacity back and forth, obviously between segments in CNBU, but also with the other groups that use quite a bit of DRAM, mobile and embedded as well. So, a critical focus on that, of course, while continuing to drive very high levels of quality and service. As a way to frame many of the value-added solutions we are working on: when you look out broadly at a lot of the compute problems we are trying to solve going forward, the energy and performance associated with how data is moved is becoming as important a factor, if not more important, than how that data is being processed. So if you think about a number of the evolutions in memory interface we're talking about: we're focused on reducing latency with products like RLDRAM 3, and we're focused on higher bandwidth with products like GDDR5.
We're focused on lower power with our LP products and the move to DDR4, and we're focused on all of those with products like HMC, again all with an eye towards managing that movement of data. In fact, if you think about the ultra-high-density drivers in the server space, one of the drivers for things like SAP HANA and Oracle TimesTen is to manage, that is to say to reduce, data movement, in this case not between the processor and memory but between memory and the storage device, the SSD or rotating media, and to dramatically reduce churn, which is what drives that growth. Going forward, beyond just focusing on how we manage those optimized interfaces between the processor and memory, there's the opportunity to put very basic elements of processing into the memory itself. And of course, as you do that, you can further reduce the need for that movement of data. One of those opportunities, of course, is the Automata Processor, which we've announced and which Brian Shirley is going to be talking about as he takes a little bit more forward-looking view of what we're doing across all of the business units. And so with that, I'll repeat Mark's thanks for joining us here, and I'd like to turn things over to Q&A. Yeah, I'll be here. Thanks. Daniel O'Mear from Lundberg. In terms of 20 nanometer DRAM and 3D NAND, can you highlight a bit how that fits into the product portfolio going forward? And does that change anything from a competitive perspective in what you can offer these customers now? Well, with regard to 3D NAND, that really is aimed at our storage portfolio, so I'm going to defer that question to Darren Thomas when he gets up and talks about SBU.
From a 20 nanometer perspective, over time that becomes relevant to all of the segments we've talked about. Just in terms of the nature of the technology and how it evolves, as well as the process of getting it designed in, it will hit first in PCs, followed very closely thereafter by our graphics products, and then, a little bit further out as we drive to the necessary quality and reliability levels, it'll hit server. Probably the slowest to move tend to be the networking players. They want you to keep building the same stuff for upwards of five years, but the flip side of that is they tend to be a little bit slower to adopt the new technologies. And I'd just point out, Tom, obviously mobile as well. Oh, of course. Yeah, absolutely. Thank you. Tom, in Mark's opening remarks, he mentioned how the company needs to transform to where you are working more closely with your partners to help them differentiate. And yet that wasn't a real part of your presentation, and I don't know that it's really been a part of the company's history, because customers just wanted lower prices historically. How do you make that transition? It seems like it's very difficult, and yet it could be very meaningful. Yeah. I think there's one fairly basic level, which is that as you move from JEDEC-driven, more standard products towards products that may at some level be JEDEC but are more differentiated, there's a much tighter cooperation with the ecosystem, with chipset partners and others, right? So, a couple of examples. I talked about the very high performance graphics products.
That ends up being a very tight relationship between us and the graphics processing companies, to make sure that that interface is tuned and working well, even down to exactly how the customer lays out the board, because how you lay out the traces there is important. So that's one area. I think HMC is another good example, where, again, both with the networking community and with the high performance computing community, we have a very tight discussion with them well in advance. Right now, we're talking a lot about the Gen 3 HMC, to understand exactly what their workloads are, what their compute problems are, and how we can be tuning our solution, particularly at the controller and firmware level, to make that work. In terms of how we help drive that transition, there are a whole bunch of things we're doing, but certainly part of it is the team and how we augment it. Mark had listed some of the people that are new to the company, and I'll just touch on a couple of them that are within CNBU. At a very high level, I think we have long had many people that do a superb job, but we need people that think about memory from the perspective of the system, and what we can be doing in driving the memory technology and interface so that it can be a more valuable and helpful part of that system. So the guy that's driving Automata and some of the advanced work we're doing, again to try and take advantage of reducing this data movement problem, is a guy named Steve Pawlowski. He joined us about seven months ago, after a long career at Intel in various roles driving CPU and platform architecture.
So that's someone, I think, who brings a different perspective. More recently, we hired a gentleman, Bob Quinn, who had actually founded a company called 3Leaf, a very innovative processing and node interconnect company that got sold to Huawei. So in addition to understanding the compute and networking space, he actually spent three years working for Huawei, a very useful perspective on the China business. And before he joined us, he was chief technologist at LSI looking at storage problems. In this infrastructure space, it really is a convergence of compute, networking, and storage, and so while we're very clear about how we manage that from a BU perspective, bridging those connections across Darren Thomas's group and mine is important. So those are just a couple of examples of the talent we're bringing in to help continue to drive that change. Tom, hi. It's John Pitzer with Credit Suisse. I was hoping you could talk a little bit more about the high-velocity applications that you said are driving densities within enterprise and cloud. How should we think about the densities of DRAM today? Where will they go? What percent of the market is being driven by high-velocity applications? And there's a debate in the investment community as to where compute stops and storage begins. How do we think about the DRAM in that system versus the NAND in that system, as you talk about things like big data and in-memory databases? Yeah. Well, in terms of the opportunity, we'll go back. This is, from a DRAM bits perspective, looking at where the opportunity is. And when you get out to 2018, again, you're almost twice PC in that enterprise space.
If you were to look at this for DRAM in total, this would be, again in 2018, roughly 30% of the opportunity being in this enterprise and cloud space. The largest piece, actually, I think is mobile, if I recall, at about 40%, and so those become the two big drivers. From a growth perspective, it kind of steps into second place as we go forward in terms of driving overall bits in the industry, behind mobile. And again, I think it really is a drive for as much density as we can put into a single box, onto a single DIMM, at some reasonable premium, right? And there are fairly healthy double-digit premiums per gigabyte today when you move above the mainstream. So today that mainstream density is 16 gigabytes; you move to 32, you move to 64, and you're getting a nice uptick. For us, it's about how we can drive that density without taking undue cost penalties. So we're currently thinking we'll do 4-high TSV stacks on 8 gigabit components, which will enable 128 gigabyte DIMMs. And we'll keep looking at whether it's economical to be driving higher stacks. DDR4E, the next generation of the technology, looks like it's going to get standardized at 8, 12, and 16 gigabit densities, and so as Scott keeps doing his job on technology, we'll keep driving that. In terms of how to think about DRAM versus storage, I think that's a fairly well understood division today. Obviously, there's a steady and very rapidly growing shift from rotating media to SSDs, and the place that's happening first and foremost is in these high-velocity applications, where they value that lower-latency access.
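The DIMM density arithmetic quoted above can be checked directly. The package count below is an assumed typical layout for illustration, not a stated Micron configuration:

```python
# Sketch of the stacked-DIMM arithmetic: 4-high stacks of 8 Gb die.
# The 32-data-package DIMM layout is an assumption (ECC packages excluded).
die_gigabits = 8        # 8 gigabit DRAM die
stack_height = 4        # 4-high TSV stack
data_packages = 32      # assumed data packages on the DIMM

gb_per_package = die_gigabits * stack_height / 8  # gigabytes per stacked package
dimm_gb = gb_per_package * data_packages
print(f"{gb_per_package:.0f} GB/package -> {dimm_gb:.0f} GB DIMM")  # 4 GB/package -> 128 GB DIMM
```

With these assumptions, 8 Gb die in 4-high stacks yield 4 GB per package, and 32 data packages give the 128 GB DIMM mentioned in the answer.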
My personal sense, and again I'll defer to Brian to talk about this in a bit more detail, is that some of the exciting evolutions there are the potential to further disintermediate that memory hierarchy. Way back when, it was just a processor, DRAM, and disk. Now we've inserted a lot of layers of cache and SSDs in between. And some of those emerging technologies are a further disintermediation between DRAM and what is NAND today. That's a good opportunity for us, but for more on that, I'll let Brian talk later this morning. Tom? Okay. Thanks. Excuse me. Steve Fox, Cross Research. Can you just talk a little bit more specifically about DDR4 and where you see it ramping as you exit the year, what percentage of your shipments it could be, and what the most obvious applications are that it has success with? And then also compare it to where it's going to exit maybe in 2016 as a percentage of your business. Yeah. Okay. So in terms of where it deploys, the 1st place is the enterprise side of servers. That transition is underway today, and that's going to drive the bulk of the transition this year. I would expect that, as we get out towards the end of the year, we won't have gotten to half of our server bits being there, but we'll be approaching that. The second place that transition occurs will be in the cloud. Those folks have a little bit more sensitivity on the cost-per-bit delta, so it's going to come there a little bit later. And then as we get out into 2016, you'll start to see the client platforms transition.
And so as we get through 2016, there will start to be a fairly significant shift over in client, and certainly by the end of 2016 the enterprise and cloud side will be very heavily converted over. Okay. Very good. Well, I think that's what we have time for, and I'm going to turn things over to Jeff Bader, who will talk to you about our embedded business. Thanks. So we'll follow a similar format to what Tom did, in terms of giving you an idea of what the embedded business is about, where we're focused, and where we see the growth drivers. First, embedded for Micron means automotive, industrial, consumer, and what we call connected home. It's those application segments that we're focused on within the embedded segment. Tom mentioned a little bit of the need for longevity, and they're sort of ordered that way as well: automotive applications have very, very long longevity requirements, industrial falls somewhere in between, and the consumer and connected home part of our portfolio tends to be closer to leading-edge technology and leading-edge transitions. And when we look at what's going on for us in those three core markets (it's four segments, but think of it as automotive, industrial, and the consumer bunch), the automotive market is just growing very rapidly today, really driven by the amount of semiconductor content and technology that's going into the automotive application, specifically the infotainment application, with a growing portion going into what we call advanced driver assistance systems, or ADAS, as you'll see it on the charts. That is really the story in automotive. The other story is the adoption of leading-edge technology there.
So increasingly, in order to deliver the experience and deliver the safety feature or security feature they're trying to deliver, it's requiring the adoption of very much leading-edge technology, which is a very new thing in automotive. And we'll talk about the things we're doing to help our automotive customers adopt that new technology. In the industrial space, it's really, in essence, the Internet of Things. It's a big buzzword, but what do we mean by that? Essentially, you have tremendous growth driven by the addition of connectivity and intelligence into what were traditionally unconnected and, for lack of a better term, dumb devices. That intelligence and connectivity brings with it a compute and memory problem that we can solve from the Micron side. And essentially, distributing that data throughout the network is part of what drives this tremendous cloud and server growth that Tom just talked about; it's all of the data coming from these now distributed and connected devices in automotive and industrial. That connectivity, of course, continues over into the smart home and the applications in the consumer space. In consumer and connected home, it's really the 4K transition and, in essence, the smart TV transition, often coupled with the 4K transition. We see that the price points on 4K are starting to reach a place where we believe that transition is going to happen. And it's not only the TV part of it, but all of the content streams that feed that TV. So think about your set-top box: what we call the over-the-top set-top box, the Roku box, the Fire TV kinds of boxes. Delivering that IP connectivity and streaming into the home, delivering through your set-top box, all of those are now trying to get ready to deliver a 4K experience as well, to be paired with those TVs. And both of those are driving significant bandwidth and density growth.
So let's take a look at that growth a little bit. Again, I spoke about automotive; you can see basically a 39% growth. We split out the NVM compound annual growth and the DRAM, because we ship a fair amount of both into all of these applications. The growth on the NVM side is really driven by things like that infotainment platform, growth in map content and navigation content, and emerging black-box applications; those are all driving significant growth in the NAND footprint that's going into these cars today. On the DRAM side, it's similarly driven by the features being offered within that infotainment cluster, but the other major driver is, as I mentioned, ADAS, the advanced driver assistance systems. Think of that as your lane departure warning, your forward collision warning, your rear camera sensing, and then what they call sensor fusion: the integration of all of that data in real time to make a decision on autonomous or semi-autonomous driving. In industrial, I mentioned it's the IoT build-out. Think about simple, previously unconnected devices now integrating a full Linux stack, integrating IP connectivity, and integrating, therefore, the NAND and DRAM footprint required to drive and support that. On the consumer side, it's 4K and the network around that 4K that is the big driver we see here, again driving significant growth in DRAM per set; in the TV case, going from a 1 or 2 gig solution up to a 4, 6, or 8 gig solution. And that's driving tremendous growth there, as you can see from the CAGRs on the DRAM side. So now I'll dive into each one of these segments a little more: what we see as the requirements in each space, what we do to leverage that, and how we deliver solutions there.
So in automotive, historically it's really about quality and longevity. You don't want a quality issue in your automobile. You don't want a reliability problem. You have a very, very long life cycle. Tom mentioned before something we introduced about 2 or 3 years ago, a program called the Product Longevity Program, which is essentially a select set of products that we make available with 10-year form, fit, and function support. That's made it very simple for our customers to adopt our technology into these long-life-cycle applications. But the bigger change in automotive is technology. And so we've been working very closely with Scott's team at the very up-front technology definition point to build in better support for the ultimate application in automotive. Just to put it in perspective: DDR2 adoption in automotive was basically about 4 years after adoption in PC. eMMC, which is our mainstream managed NAND product that goes into automobiles, was about 2 to 3 years after adoption in cell phones and handsets. We're looking at LPDDR4 adoption in automotive today. We're going to begin sampling in the second half of this year, and we're going to be ramping in production a year and a half-ish after it ramps in cell phones. So a much quicker adoption of leading-edge technologies, in order to solve the problem they're trying to solve in the car today. We're very much a leader in automotive today; we're about twice the size of our nearest competitor. Part of that is the long investment we've put into this: building the custom solutions, building unique products, and supporting that customer base. We have a very strong portfolio across the space. And the other major thing we've been doing for the last several years relates to a question at the end of Tom's session about how we're working with customers to enable their innovation.
We've invested strongly in a series of system engineering labs and customer validation labs that we've deployed now in multiple regions around the world. The idea behind that is to engage directly with the car OEMs, directly with customers, and directly with the Tier 1 suppliers and the ecosystem and chipset suppliers. We're working with them in these labs on usage model exploration, new technology exploration, and essentially architecture development, to help them prepare for the adoption of this new technology. That's been a huge improvement to our relationship with car OEMs, and it's been a huge enabler of them adopting this technology as fast as they are today. In industrial, again, this is also a market that requires longevity, but this market in particular is highly fragmented. We leverage quite a few tools and techniques to try to extend our reach into more parts of this market, rather than trying to get to each customer alone. And in this market, you can see some of the applications down there: just a tremendous breadth of application space. Serving that breadth of application space requires interesting and innovative ways to reach the customer base, and I'll talk about those on the right-hand side here. The other major trend, again, is connectivity and intelligence getting integrated into all of these different applications. One of the ways that we are engaging in that space is with the integration of wireless connectivity. Many of these applications are integrating essentially cellular connections, and we are a leader in that portion of the market. You can see it's expected to grow from around $350,000,000 to $2,000,000,000 over the next several years, really driven by the 4G LTE build-out in these applications.
Most of that market today has been on the 2G and 3G networks, and they're converting to 4G LTE, which is driving a much richer solution from our side. So that's moving from a mid-density or low-density NOR-based MCP solution to a 2- and 4-gigabit NAND plus LPDDR solution. It's a great opportunity for us to leverage the position we have today and continue to grow. I've mentioned this fragmented market. It's really a key challenge, if you will, for reaching this market. And so our focus there is to work with a broad set of customers and a broad set of partners from an enablement perspective. We've invested very heavily in the last 2 years in our distribution network, the industrial distributors that we use, really figuring out how and where we partner with them to have them be the feet on the street and drive some of the demand creation and some of this business opportunity for us, which we can then partner on and help support. The other thing we're working on very closely is with our chipset partners and with reference design partners. The example here in this picture is actually the Raspberry Pi generation 2, which you may have heard of; they announced it about 2 or 3 weeks ago. It's a way for us to work with a customer like that who buys Micron, and then that design goes off into a number of embedded applications. We're working with other single-board-computer guys and reference design houses so that a design effort we do gets multiplied out into this industrial segment. It's our key strategy to reach these kinds of markets without putting in a tremendous amount of internal resources. And then consumer and connected home: I put them on the same picture here. This is a more traditional, get-to-the-latest-lithography-generation, latest-technology type of market than either automotive or industrial is for us. It's about time to market.
It's about early adoption of new technology. The two big threads that we see here, and I mentioned one of them before, are the 4K build-out, meaning 4K plus all of the other streams that need to be 4K that touch it, and then what we call smart home in this space. In essence, think of smart home as the consumer version of the Internet of Things: various different devices in your consumer world, your smartphone, your wearables, your smart home gateways, are all getting a dose of connectivity and intelligence in order to interconnect and to drive advanced features. All of those are in turn driving a richer DRAM footprint and, in many cases, a larger storage footprint as well. And so we're serving those markets too. Again, the focus in this space is the portfolio breadth that we have. If you think about what we're offering, it runs from very low-density serial product through very high-density embedded product, a complete breadth of both legacy and leading-edge DRAM products, and in this segment as well we see adoption of LP product, much like Tom was mentioning, again driven by the power requirements and the bandwidth advantage of the LP architecture. In this space, we're also doing quite a bit in essentially custom-solution multi-chip modules, in many cases mixed-memory multi-chip modules: DRAM-NAND stacks, NAND-and-LP stacks. Those are going into applications that are form-factor driven, like the wearable space or the action camera space, and some of this home automation and so on. The need to hit a much smaller form factor is really a key part of what drives that, and to a lesser extent some of the performance advantages of those architectures. And then finally, I talked about these labs that we've built out. We have a number of them.
You can see them today: San Jose, Boise, Munich, Tokyo, and Shanghai, and a few other places. The focus of those is a much deeper partnership with our customers and the influencers around our customers, and you can see some of them on this chart. So we spend a fair amount of time today, for example in automotive, directly with the car OEMs, and in the set-top-box business directly with the carriers. We're working with the various industry consortia that are trying to establish requirements and standards in that space, both influencing what they're doing and building upon it. And of course, we partner with all of the core chipset, logic, and SoC providers to help them understand the technology transitions and jointly work out the right solution going forward. The idea behind all of that is to enable our customers on the right-hand side to adopt the technology, to take advantage of the technology trends we're driving, and to bring those to market much faster than they would otherwise. This lab infrastructure has really been a strong way for us to partner with those guys, help them solve their memory problem, help them solve their system problem, looking at it from the system side as Tom was talking about, and get to market faster with a more compelling solution. And I think with that, I will move to Q&A. Everybody wants their break. Jeff, thanks for the presentation. Can you give us a rough sense of the size of your different sub-segments? I think we have a good understanding of memory content growth per application in the various segments. Yeah. So for us, the consumer side, in particular this combined consumer and connected home, is by far the largest portion of the TAM.
That's the largest market that we could serve, followed by industrial, followed by automotive. In rough numbers, automotive is a $1,000,000,000 TAM, industrial is $2 to $3 billion, and consumer and connected home is another $8 or $9 billion, in round numbers. Our business is weighted actually more toward the automotive and industrial side, where we can create and deliver better value, and where the stickiness and longevity are appreciated and valued more. The consumer space is more back to the mix-optimization point that was raised earlier: operationally, we look at how much supply we want to put into it in any given quarter, based on the other opportunities we have in the other BUs. Hey, Jeff. It's Earl from Nomura. I see you guys have operating margins there of 20%. So I guess, how do we view the cost structure relative to the other groups, seeing as you're addressing solutions with the broader portfolio, including SLC NAND and NOR? So, cost structure: we sort of share a common cost structure across the organization, but the product mix is very different in the different organizations. Part of our mission within the Micron portfolio is really, as you said, that 20% or 22% operating margin as the target, continuing to deliver that level on technologies that we invested in long before. It's return on capital, in essence: continuing to profitably drive that and leverage that capital over the long term. As Scott was talking about with planar technology, and how much and how fast we convert it, for many embedded applications we're going to continue to drive planar NAND delivery for a long time to come. So we intend to leverage that capacity investment that we make today long into the future, which helps the cost structure and the operating margin.
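The rough TAM numbers quoted above can be totaled for a quick sense of the overall embedded opportunity; this is just a sketch of the round numbers from the talk, in $ billions:

```python
# Summing the rough embedded TAM figures quoted above (in $ billions).
# Low and high ends reflect the "2 to 3" and "8 or 9" ranges as spoken.
tam_low = {"automotive": 1, "industrial": 2, "consumer_connected_home": 8}
tam_high = {"automotive": 1, "industrial": 3, "consumer_connected_home": 9}

print(sum(tam_low.values()))   # 11 -> low end, roughly an $11B embedded TAM
print(sum(tam_high.values()))  # 13 -> high end, roughly $13B
```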
In one of your early slides, you showed the rate of growth of NAND and DRAM in the individual markets. And in every case, I think, except connected home, the DRAM was growing more quickly. Do you see that changing at some point in the future, across the board, for some underlying reason? Or is the question really not even well placed? No, no, it's a good question. I think there are several things driving faster DRAM growth right now that seem like they won't be true in the long run. Because as we think about the application space, you're going to need a certain performance, a certain bandwidth, and then, like your cell phone and various other applications, it's more likely the NAND footprint that's going to grow faster after that. So I think it's more a function of the specific applications that we have and the specific transformations happening in those applications today. That's what's driving that. But you're right: today, our DRAM growth is slightly above, or above across the line, from a growth perspective. Longer term, we would expect the NAND portion to be higher. Thanks for the presentation. Just a quick question. In the auto and industrial space, you saw Spansion buy the microcontroller business from Fujitsu, and Spansion was then bought by Cypress. You see companies like Microchip making more investment into embedded. So can you help me understand, from a competitive perspective, where the integrated market stops and the discrete market starts, and how you compete as more and more microcontroller companies try to beef up their own memory technology? Yeah. There are a couple of different ways, probably many different ways, to look at both of those acquisitions, but there's clearly a market for MCUs with integrated flash, or integrated memory, DRAM for that matter, or SRAM.
Most of that integrated flash technology is kind of hitting a wall at the 65-nanometer generation. And so I think all of the guys in that market today are trying to figure out how they get to whatever comes after 65 for them. I think that's part of the rationale for some of the acquisitions: a belief that that technology is a way to open up and continue to explore those markets. Today, if you look at the density that's getting integrated, it's pretty small, and it's pretty inefficient memory technology at the end of the day. So it's a pretty low-density application going into that space. Is there a market for that? For sure. Obviously, I think some of the Internet of Things build-out could very likely want very low-function, for lack of a better term, thin clients in the IoT space, and I think that's where those applications are going to go. When we look at it from our perspective, we have a much more compelling NAND architecture and NAND technology advantage, we have a much more compelling DRAM and DRAM technology advantage, and we have very good NOR technology that's now ramping heavily in our 45-nanometer, 300-millimeter facility. And so we look at that; it's interesting, but it's clearly not our only path. If it were our only path, we'd go figure out what we need to do about becoming an integrated MCU guy, but I think we have so many other opportunities higher up the stack, at higher-function devices and higher-value features, which are going to drive a better return for us. We're done. Okay. We're going to take about a 15-minute break, so we'll be back here at 10 a.m. Mountain Time. Thanks. If we can get everyone to come in and take your seats, that'd be great.
And we can get the day progressing here. Darren Thomas from our storage business unit will be up next. Well, while everybody is taking their seat, I have a quick poll I'd like to ask, just coming back from break, to do something a little different. How many people, and I'm hoping I can actually see beyond the lights, how many people are using a laptop or a desktop, either in their business or in their home, that still has a spinning drive in it? Can I see by a show of hands? Yeah. The industry average is a little bit more than 50%, and that's still a high number, but it's transforming very quickly; you're going to see that. Now, the good news is I don't think I saw any Micron hands go up, so I would expect them not to be doing that. But what I do have here is our brand-new MX200 250-gig drive. This is out of our performance line, the Crucial performance line; I'm going to talk about the two swim lanes here in a minute. This particular drive, we just launched it, and it's a very unique drive. It has dynamic write caching, which means that any time it writes, if it's not real busy, it will go into a very fast writing mode, a very unique mode, and it writes much faster than the average write speed of an SSD. It also has encryption in it, like an enterprise drive. And I have enough for everybody in this room to have one, so when you leave today, I'd ask you to stop at the table with our support staff outside, and they'll give you one of these drives. The drive comes with the tray necessary to mount it, and it also comes with the software and the tools. The software is downloadable, and there's a key in here to use it, so you can do the upgrade yourself. And if you don't feel comfortable doing that, you can call our Crucial team, and the award-winning service department there will walk you through it.
Now, what I will warn you is you can no longer start your boot, go get a cup of coffee, and come back; it will be done by the time you get back. So, do I have the slide? Who's got the projector advancer? Okay. While I'm getting that, let me just dive into the presentation and start by saying that the SBU is going through a transformation. We are at a transformational time in our business. I've been here 11 months, as we talked about last night, and it's been my role to do a lot of transformation. What I'm going to try to encourage you to see through this presentation is that, yes, it's product. We have to come out with products like this. We have a brand-new M600 that is in more quals at one time than any of our previous drives were in their entire life. Those kinds of transformations are important, but it's not just about product. It's about people: having the right kind of people, because now we're talking directly to customers, and we need people like Steve Pawlowski, whom Tom mentioned, somebody who can talk to the customer at their architectural level. Not just about our NAND, but about their operating system, the problems they see, the headaches they have. So that's what's important. It's about people, and it's also about partnerships; I'll get to a slide in about five slides on our partnerships. In the enterprise, nobody does it alone. You need the full breadth and depth of your enterprise partners to help you execute and deliver. It helps you with time to market, it helps you with cost; it does a lot of valuable things. And that results in performance. So it's products, people, partnerships, and then it ends up in performance. I'm going to encourage you to see that, and I'll try to point it out throughout the slides. This first slide I have up is really about how my market looks from the outside world.
This is how the enterprise and the OEM customers look at us in the SSD space. There's a set of enterprise-class customers that care about endurance, performance, reliability, and always-on operation. The drive is never off; there's no idle time in the drive. That's a unique set of characteristics the enterprise industry shares, and for that, they're willing to pay more. They use some different interfaces that are highly technical and highly capable, and that business has a pretty good CAGR; you see it there, 53%. Matter of fact, all the CAGRs of the first three are in the mid-fifties. And those are bit CAGRs, by the way. Then you go to the data center. Now, a lot of people think the data center is 8 customers: the Googles, the Alibabas, the Amazons. Those are the data center leaders, the lighthouse guys of the data center, but almost every IT department covets that business model. They covet being able to make a data center inside their own chain-link fence that operates with that kind of scale and that kind of efficiency. So when we say data center, we're not just talking about the 8 big data center guys; we're talking about all enterprise customers who are trying to build, or are building, a data-center-class application and want to buy the products that meet that. The requirements there: they use open platforms, which means it doesn't require any specific company's hardware, and they don't mind buying directly from Micron. Matter of fact, that's a preferred model. And one of the big trends in this market is what we call lights out; that's the term they use. By that they mean, when they build a data center, to get the scale and cost they're looking for, they will quite often design the data center so that no human goes in there. There's no human intervention.
Their design is typically such that it's 6 months to a year before any human walks back into that building. What that means is they have to plan ahead for growth and failure mechanisms and all that. Well, one of the ways they plan for failure mechanisms is to remove everything mechanical: no fans, no spinning drives. That's a very good thing for us, because we don't sell spinning drives. So the data-center-class people are looking for high degrees of economics achieved at high scale. Very interesting: look at the CAGR; it's the biggest one up there. A great opportunity for companies like Micron that can sell direct. The next one is client. It is the largest of these businesses by far, and it is also probably the most competitive market we play in. The CAGR is pretty big. What they're looking for is thin and lightweight. As a matter of fact, when you pick this box up, you might want to look inside; it feels empty, it's pretty thin, it's pretty lightweight. The other thing they're looking for is low power, long battery life. That's the dream of the laptop or notebook user: it doesn't weigh very much, the battery lasts a long time, and everybody likes the little Gucci-thin ones. So that's an important piece. The most significant part of this trend for us is the business-class users, what's called the commercial user in the industry. And we're near the tipping point. Now, when we say the tipping point: today there's a strong drive to use our technology, but it's only at the highest end, and there are a lot of spinning drives sold at the lower end. When you hit the tipping point, they just sweep the product line. They say, from now on, everything in this product line is SSDs. And that's happening.
Now, the price parity point is not actually at parity; it's more like two and a half x. That's kind of the number everybody throws around the industry. And as I noted on the slide here, we're getting near that tipping point for these client drives. This is pretty much an agreed-to comment in the industry; you'd get the same thing if you asked the OEMs. And then the consumer space: it's probably the most different of these. Consumer is about people and upgrades. If you're smart, you'll get one of those SSDs and put it in, and you'll do an upgrade; or you can get the technical brother-in-law or family member, or our team, to walk you through it. But once you've done an upgrade, you're going to turn a laptop that takes a minute and 30 seconds to boot into one that takes about 18 seconds to boot. My joke is that when I used to load one of my programs, it named every person who ever wrote the code; you could read their names as it's loading. You can't read their names anymore. That's one of the advantages. Well, that's the consumer market in the SSD space. It's a lower CAGR, mostly because as the new notebooks come out with an SSD, there's less room to upgrade, so you can imagine that market would start to slow down. I want to talk a little bit about the trends in the industry. And the key is, I'm going to go around this from the upper left-hand corner, around the circle here. In the enterprise space, if you look, there are three different interfaces. Now, I want to point out that we talk about interfaces because, for those of us that live in this industry, the interface means a behavior, means a workload to us. It's not about the interface itself.
When somebody says SATA or PCIe to somebody like me, when they say SATA, they're saying a server-based storage device that's used by low-cost servers. That's what that means to me. When they say PCIe, they're talking about very high performing workloads, something like oil and gas or genomics or, you know, a military application, something very high speed, like database acceleration. When you say SAS, you're talking about external storage. And so we call them by the interfaces, but they have very different workload behaviors behind them. Now what I wanna point out here is that the PCIe interface in enterprise is growing. It's actually the fastest growing. You can't tell it from that chart, but it grows at 100% year over year through 2018, and it gets up to about 36% of the total. The more interesting one is SAS. SAS was always a bigger technology, and now as the external storage companies are beginning to adopt and deploy SSDs in their applications, you're seeing the SAS market grow. It's growing 72%, and by 2018 it's 45% of the market. So that's why SAS is so important to us. This is a fast-growing segment from a big base, and there are lots of SAS drive applications out there. Coming around to the data center, there are really three interfaces there too, but if you notice, SAS at the bottom is very small in data center. Data center folks really are after a more cost-effective deal; they're typically not the highest performance model. So you see that very, very thin blue line at the bottom. So most of it's PCIe and SATA. And what you're seeing here is the data center guys are still very strong in SATA, because it's a very cost-effective design for them, and they put a lot of the redundancy and reliability up in the application layer so they can use what is almost a client-class drive. The difference here is it's always on.
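The growth rates quoted (100% year over year for enterprise PCIe, 72% for SAS) compound quickly over the three years to 2018. A quick illustration: the rates come from the talk, but the base volume of 1.0 is just a placeholder, not a real unit count.

```python
# Compound annual growth: value after `years` at rate `rate`.
# Rates are those quoted in the talk; 1.0 is a placeholder base.
def compound(base, rate, years):
    return base * (1 + rate) ** years

pcie_2018 = compound(1.0, 1.00, 3)  # 100% YoY, 2015 -> 2018
sas_2018 = compound(1.0, 0.72, 3)   # 72% YoY, 2015 -> 2018
print(pcie_2018, round(sas_2018, 2))  # 8.0 5.09
```

So a segment doubling annually is roughly 8x its 2015 size by 2018, which is how a "thin line on the chart" can still end up at a third or more of the market.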
The client-class drives have an 8-hour duty cycle. These drives are on 24 hours a day, 365 days a year. So the drives are beefed up a bit, but basically a SATA drive meets their requirements. And you're seeing some PCIe coming in, but this is not the PCIe from enterprise. This PCIe is the low-cost version. The server processor will come with PCIe interfaces, and enough of them in the next generation of servers that you'll begin to see architects putting the PCIe drive directly on the PCIe bus of the processor. And so what you're seeing here is really a cost-savings move to go to PCIe. And then that same thing occurs in client. The client drive is not looking for the screaming performance the enterprise guy gets out of PCIe. What they're looking for is the cost savings in PCIe. And so you notice it changes. What I wanna point out is it looks like it's a big number in 2015, but that's mostly one supplier, one vendor out there. By 2016, Intel will have processor chips that have lots of PCIe interfaces, and we can put drives directly on them. And by 2016 you'll see the rank-and-file OEMs starting to move to the PCIe bus. It's gonna be a transition; it's not gonna happen overnight. This is not something where, you know, the big notebook guys are gonna just redo their whole line just to do this. But as they update their lines, they're gonna go to PCIe. And then you see the consumer side PCIe kind of follows. The consumer side is really the same as the client side, just lagging a little bit, because it is the upgrade market. So roughly, what I would walk away with from this is: SATA is still big. As we walked through this, SAS is enormous in the enterprise space. And PCIe is coming, and coming on strong. That's the walk-away message from this slide.
Now I wanted to talk technology, and I'm gonna limit myself to referring to these by products. If you wanna ask a lot of technology questions, I'm gonna have them re-mic Scott, because this is really his area. But what I wanna point you to is the go-to-market standpoint, and this is looking at SSDs. If you look at the light blue space on the left, what you'll see is before 2015. And I'll make a point: these are the transitions as Micron is doing them. So if I haven't put TLC in as early as you think it hit the market, that's because we're coming out with it later this year, so this is where I showed it. But if you look on the left-hand side, you'll see that MLC met every piece of the market. It met this consumer retail market. It met the client in its day; it was priced effectively. You'll see the data center still needs the MLC for the endurance and capability. It met the enterprise space. So it was a universal technology that fit across the product set. And I think some of the questions the team was asking Scott were about why the change to TLC and planar. Well, what I've shown is the light blue space in the middle. This is planar TLC, and this is looking at it the way a market person looks at it. And what you'll notice is planar TLC is dominating in the retail space. So obviously it's driving the prices, and it's affecting every company, including us. And if you look at the client space, you'll see the same thing. Planar TLC is a good product. It's good enough. But from the description I just gave you on the previous slides, the client space doesn't have the same performance requirements. They don't have the same endurance requirements. None of the requirements are the same. So TLC works fine there.
But if you'll notice, we have no intention of moving it up into the data center. It might make it into the low end of the data center, maybe at less than one fill per day, because the data center sometimes buys client drives. But for the true data center, the one that's always on, never off, lights out, doing what we talked about, TLC at 16 nanometer and below just isn't good enough. We can't reach the fills per day, the endurance the customer wants, or the performance they want with that drive. And, of course, the enterprise is a step function above that, so it certainly doesn't fit there. So what I'm showing you here is a way to think of Micron's desire, and the insight years ago, which was 3D TLC. If you'll notice, 3D TLC meets it all again. So that's the technology that, at least in my SSD space, gives me the products I need to be competitive in the market across all of these segments. It gives me the price; Scott showed you how it's a better price than 16 nanometer. It's way better performance and way better endurance. I mean, significantly better. And so 3D is the Micron future for SSD. It's the industry future for SSD. And I'll leave it to Scott to explain all the technology, but I'm just showing you that this transition has followed this pattern, and it will continue to follow this pattern. So the technology deployment for us is: we will have planar TLC in our mid-range and low-end products. But as we go up, we can't deploy it there, and we can deploy the 3D TLC, very cost effective with the performance that the customers need. So now, just kinda walking through this, I wanna walk you through a little bit of this transformation I was talking about when I opened. So, no excuses: we are a product company. We have to have an SSD portfolio, and we have to have technology.
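The "fills per day" language above is how the industry quotes SSD endurance: drive writes per day (DWPD) over a warranty period, or equivalently total terabytes written (TBW). A small sketch with hypothetical numbers (the capacities, fill rates, and warranty length below are my assumptions for illustration, not Micron specs):

```python
# Hypothetical endurance math: endurance is commonly quoted as
# drive writes ("fills") per day, or as total terabytes written
# (TBW) over a warranty period. Numbers here are illustrative.
def tbw(capacity_gb, fills_per_day, warranty_years):
    """Total terabytes written over the warranty period."""
    return capacity_gb * fills_per_day * 365 * warranty_years / 1000.0

client = tbw(480, 0.3, 5)       # client drive: well under 1 fill/day
data_center = tbw(480, 3.0, 5)  # always-on drive: several fills/day
print(client, data_center)      # 262.8 2628.0
```

The order-of-magnitude gap between the two results is the point being made: a planar TLC drive sized for client write loads can't absorb an always-on data center workload.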
And I think Scott's done a great job of showing where our technology's going, and some of this is you just gotta sit back and watch these things happen. But we have to have a great portfolio. Now I wanna talk about expanding the enterprise capabilities, because the portfolio is absolutely on the mend. We're improving it. This drive, the MX600, has a lot of technology; there's the announcement we did last night; and we've got next-generation drives coming all the rest of this year. So there's lots of product here. But to make that product really reach our end-user customer, we have to change a few things about us. So first of all, we've expanded our enterprise-class sales team. We hired a gentleman, and Mark Adams showed you his name: Mark Glasgow. Mark Glasgow comes from the enterprise sales side. He's a true enterprise salesperson who is used to talking to, you know, 25, 50 customers a week, and I'm talking end-user customers. This is not negotiating with one of our OEMs; this is actually learning from the end-user customer what they want, what they need, and explaining to them our portfolio so that they can buy our technology either directly from us or through our OEM partners. So we just really expanded that team. The team had a big kickoff earlier this week, and that team is now up and running. We also hired, like Tom Eby hired Steve Pawlowski, and you might have noticed an announcement from us: we hired a gentleman named Rob Peglar. Rob Peglar came from EMC, and he's now on board with us; he also started earlier this week.
And Rob is one of those people that can sit with our customer and talk about their software, their applications, their OSs, their networking, all the way down to anything they wanna talk about, all the way down to how a drive can make a difference in those applications. And as you might note, when you talk about this class of people, Steve Pawlowski and Rob Peglar are kind of the same class; they're birds of a feather. These two gentlemen will spend a lot of time with each other, and we're adding to that capability. But that's another example. We put qual teams in the countries where we sell these products. At one time, most of our qual teams were here in the US. We now have qual teams overseas, where the customers buy these products, where they deploy these products. And so we've expanded that. We've expanded our firmware team, and we've expanded even my marketing team, to include people that know how to talk to end-user customers and have access to that process. So: expanding the capabilities, and strategic partnerships. I'm gonna mention the Seagate relationship. We talked about it a little bit last night, but I've got a slide following this where I'll roll it out for you again. But that's not the only strategic partnership. We've had one with Intel for a long time, and it's not the only one we're gonna do. So I'm just gonna say stay tuned. We're gonna continue to do these strategic partnerships, because with the enterprise-class partners we can do more than we can by ourselves. One plus one is three, and we'll both make more money doing these things. So strategic partnerships are a very important part of our business. 3D NAND: I've shown you the slide. It's our future. When we get to 3D NAND, we will have the products we need. We'll have the technology we need.
And it just keeps getting better after that, because we have next-generation memories, which I'm not gonna talk about, but I'll just point to the fact that these things are on the horizon, and my team knows how to deploy them and use them in our customer relationships. And then finally, something people don't ask a lot of questions about is software. Now I'm not talking about firmware; I'm talking about software. Software means our ability to influence VMware or Microsoft or Oracle or SAP or somebody like that. I hired a software vice president; I think I mentioned it at the Hong Kong investor meeting. He's been busy at work. We actually opened an Austin design center. So we now have an office complex with over a dozen people in there writing code, communicating with our partners, making sure that if there's a way they can use our systems better, we're working on that with them. So we're partnering with them at a software level. A very important piece of a transformation for Micron, to have the ability to make the world better and for us to dictate how it gets better. And so here's the Micron-Seagate announcement. As we've mentioned, it's a multi-year agreement. The way you should think about that is the enterprise customers wanna know that we're not speed dating. This is a real deal. It combines the innovation and technical expertise of two industry giants. Seagate's got some excellent technology, not the least of which is their SAS technology, which Seagate invented, and now we have access to that. Micron has, not the least of it, 16-nanometer NAND. Our first product coming will be a 16-nanometer NAND product, so we're gonna have an enterprise-class 16-nanometer product. That's the kind of innovation that we can do together. The initial collaboration is the SAS drive, but it's not limited to that.
That was just the one I was the most excited to have. Seagate has a lot of other technologies. They have flash arrays, all-flash arrays; they have many other technologies that we might be interested in, and we have technologies they'll be interested in. And then, obviously, we have this access to their drives and their other technologies, but they have access to our strategic supply. We're bringing them into the fold and saying, look, we're gonna make sure that if you forecast and manage your inputs to us properly, we will make sure you get your volumes. And then it establishes a framework for future collaboration. So this is a real enterprise-class partnership, and it will result in real, measurable results for Micron. I'm not gonna go into any more detail than that, but I'll tell you that this is what we've been missing. I wasn't playing in that big blue space, which was one of the most lucrative SSD places, and in a very short period of time I'm gonna be playing there. And that's very important. So in closing, my intent is, you know, I needed to build a world-class team, and together with Tom's team and my team and Mike Rayfield's team, we're combining the best of the best, and we're building and hiring these resources and sharing them. Just as an example, Steve Pawlowski: I use him probably half as much as Tom does, but my team uses him because he's an expert at these kinds of things. And likewise with Rob Peglar. So the world-class team's important. Innovation: we're gonna continue innovating, and we're gonna start taking advantage of partnering and partner innovation. And then engaging: we're gonna engage with end-user customers.
It's an important part of our transformation, and it's what will make Micron be seen as a technology leader, not just by our OEMs and the people who buy the technology from us today, but by the end users, the people who use that technology. And that will redefine the future. So, Q&A. Let's see, I think I beat the microphone people.

Thanks for the presentation. Maybe could you talk about what your enterprise SSD revenue is right now, and what you target over the next two, three years?

Yeah, you know, what I don't wanna do is get into a prediction, because the revenues are gonna be driven by a lot of factors other than just me doing this. Do we split them up? So, we don't break out our enterprise revenue specifically. What we've said is SSD overall is about 20% to 25% of our trade NAND revenue overall. And I think on our earnings call, we said roughly 25% to 30% of that is enterprise today. And, by the way, that's not enough, right? That's the key: that's not enough. And we wanna raise that, because the larger the enterprise revenue, the larger the margins will be.

In back. You mentioned last night that both you and Seagate will be able to take this SAS SSD to market to your individual customers. Are you able to start selling the SSD that they already have immediately, or are you gonna develop this SSD, and it's gonna take some time to get that qualified?

Yeah, it's a great question, and I actually asked that question. I'm not sure anybody would have allowed me to sell that drive before, but no, it turns out that the drive had a very limited volume to it, and I just couldn't get any volume. No, we're not gonna sell the current drive. We are gonna wait and sell the new drive, which is gonna come out in a very short period of time. Let's see.

Hi, thanks. Mark Newman from Bernstein.
As you talk about building out your capabilities and hiring people on the enterprise side, are you thinking about actually building out an enterprise sales force and selling directly to enterprise customers, or do you think your main channel is gonna be through the incumbent OEMs, EMC, etcetera? And then the follow-on question related to that is, could you comment on the profitability in the enterprise and data center segments?

Okay, let me take the first one first. It's a great question, and I'm glad you asked it. I didn't wanna leave you thinking that we're gonna go hire 15,000 salespeople and try to be in every market and every place. That's not the model. The model is really very small, what in the industry is referred to as sell-with teams. So we would go to customers, and we'd partner with somebody like a Dell or an HP, and they might bring us a deal and say, Micron, you might wanna talk to this customer with us. It'll be a very focused and targeted approach with our OEM partners, and they will help guide us on that. We may find some deals and bring them to them. But that is not gonna be a sweeping-across-the-world-or-nation kind of deal. The team will also focus on the data center class accounts. So the team does have the responsibility to focus on the data center, with a much smaller group of people. So I would expect this team to be relatively small. It's not gonna be the size of our regular sales team, but it's gonna be very focused on selling enterprise class. So you go sell to database acceleration guys, or you go sell to oil and gas guys. You might hit all 10 of the oil and gas companies, or maybe 5 or 6, or go sell with some Oracle partners or something like that. So it'll be very targeted. The second question, what was it? Oh, profitability. Yeah, I guess I could say there is a difference in profitability.
Data center is still better than client, but it's not as good as enterprise, because those customers are willing to accept a less robust drive. So the profitability is somewhere between the two. That's probably the best way to say it.

Hi, Carl Ackerman from Cowen and Company. Just curious, what are your thoughts on some of the larger hyperscale customers buying raw NAND and making their own drives? Do you see this altering the competitive balance for drive suppliers? And what are your thoughts, if others pursue this integrated hardware strategy, on your partnership over time?

I'm not sure I heard the last question clearly. What was it about the partnership?

If other hyperscale customers, as they adopt this new enterprise SSD product.

Okay, so I'm gonna take the last one first. Let me make sure I understand. You're asking if the enterprise customers are gonna notice the difference between the two drives and will care?

No, no. As the hyperscale customers, if they move more toward just buying raw NAND, how does that impact you?

Okay. So, first of all, the hyperscale customers: there is a group of them, a very small number, that can actually build their own SSDs. They have enormous engineering teams, and they wanna buy NAND, and we're happy to sell them NAND. That group of people is, first of all, a small number. Not everybody can build their own SSD. It's not a super simple task. There are advantages to doing it yourself: you can change form factors, you can do all kinds of different things like that. So we're certainly in support of that, and there will be a set of data center class customers, if you will, the top maybe 4 or 5, that can do that. But most companies don't have that capability. They just don't have the ability to do that integration.
So they will buy from us, and the enterprise relationship that we have with Seagate now will allow us to sell in any one of those categories. If somebody wants to buy the drive from us, including data center, we can sell the drive, or we can sell the chips. What the relationship does not cover is us breaking the drive apart and selling the parts.

Hey, Darren. It's Erl Hagee from Nomura. I guess, how do we look at the moving pieces related to software, firmware, and controller technology as a result of the Seagate partnership? And what's your strategy there?

The hardware and software, say that again?

It was software, firmware, and the controller technology.

Yeah, we actually didn't break it apart by components. What I'm really doing is I have a relationship with them to buy the same drive coming off that ODM factory that they have. They will still manage the software, the firmware, the controller. My team will buy those same parts and build the same drive. So, in a way, they're my engineering team. They're the team that's responsible for making sure that the components all work together, and that's all part of the relationship. And what we're doing is really acquiring the drive, like you could just go buy it from an ODM manufacturer. So it's much simpler than me having to go buy individual components and then try to manage firmware, software, and hardware. The intent is that the products are identical. Now this is key. The products are identical, so that when I go to an OEM, if they wanna get two drives for the price of one qual, they can do that, because the drives are identical. Other than the fact that our name is written on ours with a different label, and their name is written on theirs with a different label, the drives are identical. Electrically they're identical, and firmware-wise they're identical.
Darren, we've got one more question over here, okay?

Hi, Darren. What's your view on using an SSD that talks directly to the memory bus on a CPU for higher performance? What size market do you think that is?

Yeah, it's interesting. That market's got two flavors to it. One is, and I know you're talking about the DIMM sockets, where you put them on a DIMM. There are people who put them on a DIMM and talk to it with memory semantics, so it still looks like DRAM. And that technology is interesting to us. The CNBU team is actually looking at those products, and that's actually pretty interesting. It ends up being like a power-saving version of DRAM, or a power-loss-protected version of DRAM. The other one is where you actually change the semantics. So you put it in a DIMM socket, but it looks like a SATA drive. There are a few industries where that's interesting, and it's really kind of the older industries, where they've used older software and are limited by that older software. And if you do that, the system gets a lot faster, and it's helpful. The problem is there are very few DIMM sockets, and customers are actually putting DRAM in those DIMM sockets. So it's not like, if they have, you know, 36 DIMM sockets, I have 36 drives. The customer's gonna use 20 of those for DRAM, and so I have 16. And these cards are very small. So the problem is it's just not a scalable answer for many customers. It's good for something immediate, somewhere between DRAM and storage, like storage acceleration. It's good for that, but it's very limited in scope to people who can fit in that size capacity. So the market hasn't shown itself to be very big. There have been a few startup companies that have tried it.
We're certainly supportive of them, and when they do it, we talk to them. We would absolutely look at them, and if the market gets big, we certainly wanna participate. But right now, I think the market is still TBD. I've gotta see the market more. There have been a few very interesting applications, but they're almost niche. So we're looking for that market to get a little bigger before we commit. And is that it? Okay. Yeah, you're standing right under the two lights. Alright, well, thank you. And I'd like to introduce Mike Rayfield, our mobile VP, and I'll give you the control, Mike.

Thanks, Darren. So, I really appreciate you all participating in my twice-annual public performance review. I actually get a lot out of the conversations we have over the two days; I learn probably as much as you do. So today, we're going to talk about three things, and it's pretty consistent with the last four times that we've spoken. We're gonna talk about the amazing acceleration in the capability of mobile devices seen over the last couple of years and where we think that's going, both in terms of memory content and storage content. We're gonna talk about a shift that's happened in storage in mobile devices. If you'd looked at the NAND business in mobile a year ago, a year and a half ago, and forecasted it, it was all something called eMMC, which is basically a controller and NAND. Well, on the sort of journey to a $2,000,000,000 phone business, people wanted to get to production much quicker and have a simpler assembly process, and what was born was high-complexity eMCPs. The reason that's important, and we'll go into more detail on it, is it requires NAND, DRAM, firmware, packaging technology, and controller technology. And there are very few people that have all of those. So it's been a huge opportunity for us.
And then the last one is, we've talked a lot over the last couple of years about how we go to market. How do you go to market and be successful in a market with a very concentrated customer base? And all along, we've talked about how it's actually becoming less concentrated. There's innovation happening in areas where it isn't as concentrated, and sort of the rise of what used to be China Inc. and is now a bunch of named customers that are really doing some pretty significant innovation. So let's start. If you'd looked at this slide a year and a half ago, it would have actually had some different categories. You know, we've put high-end and mid-range smartphones together, and the reason we do it, quite frankly, is the memory content in the mid range of the smartphone market is now significantly higher than the high end, in DRAM now, and ultimately it will be in NAND. If you look, the next box not that long ago would have been feature phones. But the reality is, with things like Android One, we've got smartphones at under $100 that have a gigabyte of DRAM, and they have fundamentally started to completely replace feature phones, and feature phones had very little memory in them. And that's ultimately what drove 550,000,000 smartphone units being sold in the fourth quarter of last year: you could buy so much for so little, and the differentiation a lot of times was in memory content. Tablets are architecturally exactly the same as smartphones. As we ended up with 6-inch displays on smartphones, it's hard to tell a 6-inch smartphone from a 7-inch tablet. And so I think, going forward, we're gonna start to see those things merge. Again, architecturally the same: they're gonna care about the same things, they're gonna have the same applications, they're gonna need the same experience, which historically was better on a tablet, so it's gonna force a better experience on the phone.
And I think that'll continue to be a great driving force. And then the last thing is smartwatches. It's sort of been parsed out of what the Internet of Things was, and the reality is it's now adopted the smartphone architecture. We'll talk a little later, but the memory performance necessary on that device has turned out to be pretty significant. It's gonna be helped by your tablet or your phone, but the things you expect out of it are far beyond what I think people would have even thought a little while ago, and that's driving significant memory content as well. Let's see if we can quantify a little of this. When we talked in August of 2013, we had this conference in New York, and I mentioned this little company that almost nobody had heard of, this little company called Xiaomi, and they had this phone called the Red Rice that nobody had really known much about. It was about a $200 phone. It had 2 gigabytes of DRAM, they had a 3-gigabyte option coming, and it was a couple hundred bucks. And I said, you know, this is what's gonna change what happens on our end, because it's a really amazing product. In the last two years, I think everybody's heard of the company, and ultimately there are a bunch more companies like that driving the content. If you look here at 2014, high-end phones, which include that mid range, are about a billion units. And again, the units in the mid range, the $300 phones, are the highest functionality. My main phone is a Chinese phone. It's $349. It's got 4 gig of NAND and 3 gig of DRAM. It's an amazing device. This is the kind of stuff that's getting put out and driving the content we're seeing. Fundamentally, we thought it was heading to maybe 4 gig, but the reality is we saw phones introduced just this year at CES that had 4 gig of DRAM in them. And you can start thinking about the growth. If you look at tablets, the line sits almost on top of smartphones, as we talked about.
And again, because you're gonna want the same kind of experience, we're gonna see those in the neighborhood of 4 gig pretty quickly, and we're already seeing some. Asus, Xiaomi, a bunch of folks have announced 4-gig phones already, with both LP3 and LP4. And then smartwatches: I think if I'd quizzed people before this and said, how much DRAM is gonna be in a smartwatch in a couple of years, people would have come up with, you know, 256 or 512 megabytes. The reality is it's gonna be a gig, moving to 2 gig. And a lot of it is, you may capture some video, you may wanna display some video, you may wanna compress some things, you may want frame buffers. That's gonna drive a couple of things; it's gonna drive packaging technology, so it's gonna be eMCP. So you're gonna need NAND, you're gonna need DRAM, you're gonna need controllers, you're gonna need packaging, and it's gonna drive staggering numbers. So this is gonna be, I think, a growth driver that many people hadn't thought through in the past. Let's talk about this shift in managed memory in NAND. Again, two and a half years ago, when we sort of started on this journey of building up our mobile business, almost all of the NAND in phones was eMMC. And we didn't have a very good position in it. We started investing aggressively in firmware, and we've invested aggressively in working with partners on controllers, but there were five or six people that we competed with. In this shift to try to get phones to market more rapidly, we have seen, all of a sudden, a significant move to higher density eMCPs. And as you'll see from the chart here, it used to be 2 gig and 4 gig. We're seeing configurations now that go up to 32 gigabytes of NAND and 24 gigabits of DRAM on an eMCP. So if you've got NAND, if you've got DRAM, if you've got controllers, firmware, and packaging, the opportunity is significant. This chart talks about 60% of the market; I honestly believe the opportunity exists to be greater than that.
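One thing worth noting about those eMCP configurations is the mixed units: NAND is conventionally quoted in gigabytes, mobile DRAM in gigabits. A quick conversion sketch (the 24-gigabit figure is from the talk; the helper name is mine):

```python
# Mobile DRAM is quoted in gigabits; 8 bits per byte converts it
# to the gigabyte figures used for phone specs.
def gigabits_to_gigabytes(gigabits):
    return gigabits / 8

dram_gb = gigabits_to_gigabytes(24)  # 24 Gb of DRAM in the eMCP
print(dram_gb)  # 3.0
```

So a single 32 GB NAND + 24 Gb DRAM eMCP is what enables the 3-gig-of-DRAM phones discussed earlier, from one package.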
And so it's put us in an outstanding position, and you combine that with the focus on many of the emerging markets and the emerging customers who drive this, and it's been a great opportunity for us. You've seen the data on our conference calls about how rapidly the mobile NAND business is growing. This is what's driving it. So we've talked about the markets and why we're interested in them, but in the end it's about solutions, right? How do we differentiate? So, LP3 and LP4: we've ramped 25 nanometer and qualified it at the majority of our customers. As Scott talked about, we're ramping 20 nanometer. That allows us to build 6 gig and 8 gig LP3 and LP4 parts, and why that's more important now than it ever was is that it allows you to build a 3 gig phone and a 4 gig phone on a four-stack part. So we're working aggressively to make sure we've got what we think will be the best technology and the best packaging to serve that upper end. Then there's all the firmware we've invested in and the team that we've grown; there are two places where you use that firmware. You use it in eMMC. That high end, which was supposed to be the whole market, we still service, and we can partner with our high-end customers to sell it. But the 60% of the market, and the one that is most exciting to me, is that same firmware and controller work going into the eMCPs. We've got LP2, LP3, and LP4 eMCPs that you will see in phones ranging anywhere from $70 to $350, with the functionality of what's now an $800 phone. And that's the shift that I think has come significantly faster than all of us had anticipated, and it plays very well for Micron. For the watch, as we talk about this, it's about power, it's about energy efficiency, it's about a small package.
And again, people are going to put as much as possible into that package. What we'll end up with is a small processor; on top of that processor will be what's called an ePoP, which is basically an eMCP mounted on top of the processor, and that's all that's going to be in the device. There'll be a PMIC and a radio off to the side. We think all the work we've done on eMCPs, which has taught us a lot about ePoP and a lot about packaging, along with the work that Scott's done, is going to put us in a great position as this market grows. This is going to grow with the big phone guys, right? Ultimately you're going to tie these things to your device. You're going to find a watch you like, and it's going to make you like your phone even more, your tablet even more; they're going to go together. We think it's going to be a pretty amazing accessory market, if you will, that'll drive significant content. Historically, people thought it would be relatively low; I think both DRAM and NAND content will be pretty exciting. When we talk about go to market: when I started speaking with you a couple of years ago, it was all about "the high end isn't growing anymore, there are only two customers in the world, and that's going to be a challenge for you, Mike; you're not going to figure that out." The reality is that where innovation is coming from now is not only the big guys. They build amazing devices, but innovation and functionality and price performance are coming out of China. The largest cell phone supplier in India is one most of you had never heard of until very recently, right? Micromax. So it's about working with people like that, and working with the people who do the chipsets for those folks.
All those small guys down in the corner: Indonesia's largest handset maker, Brazil's largest handset maker, India's largest handset maker. Finding a way to work with those folks and come up with solutions that allow them to get to market very rapidly will ultimately become a larger and larger percentage of our business. A year ago, when we talked about China, it was "China Inc." Now there are six or eight named players, many of whom you all know, approaching 5% of the market. So having relationships with those folks, doing products for them, and ultimately having them drive this eMCP business of ours has been a great opportunity for us, and I think how we looked at the market made a big difference in that. And then finally, while this is a two-billion-unit-a-year opportunity, I honestly believe, and this is the thing that excites me most, that mobile computing is just starting. It's the first market in the history of the world, I think, that you could call a two-billion-unit opportunity at the start. But if you think about tightly coupled SoC architectures with a combination of NAND and DRAM together, either mounted on top of them or very close to them, that architecture is going to go to all these adjacent markets. So I think all the learning we do here is going to carry further and further. As you get flexible displays, as you want a greater level of functionality without compromising the experience, on things around your house, things around your wrist, things you wear, this development we've done will probably dwarf the current market. I think that's what's so exciting: we're just starting, and it's already pretty big and exciting. So with that, any questions? Thanks for the presentation, Mike. This is Harlan Sur from JP Morgan. Good to see you moving up the value chain, looking at eMMC and eMCP.
One of the issues, obviously, with a module-based solution is that you're purchasing a merchant controller solution, and you've got the overhead of the module manufacturing. So help us understand how Micron is optimizing that overhead to continue to drive a pretty good gross margin profile in those module-based products. Great question. So I look at the list of things where I can be most impactful, and building the right NAND and the right DRAM, obviously, and differentiation through firmware are critical. And then, as my business grows, I'm a pretty significant partner to our announced controller supplier, and I'm significant enough that I can work with them and say, hey, I'd like you to make this; this is what I want. So I get the leverage of basically a semi-custom controller. They have the scale to go off and do it, and I can focus on where I think the higher-order bit is in terms of accomplishing things, and that's firmware on my own technology, and then packaging. So out in time, will I do some of my own controllers where I think I can differentiate? Sure. But right now, over the last two years, it's been: get our footprint in the marketplace, get the design wins, make ourselves successful, and then differentiate in other areas later. Thanks. Sandy Balaji for Jefferies. Can you talk a little bit about the timing of LPDDR4 as well as 3D NAND in mobile, both for Micron and for the industry? Thanks. Sure. So, LPDDR4: we've got parts that are internally qualified at our customers, and we're ready to ship when they start to ramp. The LPDDR4 versus LPDDR3 trade-off is going to be an economic one, right? I want the design wins; it's a matter of deciding when to ramp it hard.
When it's a significant percentage of the market, which is probably sometime next year, I'll have my fair share of it, and we will have traded off to make sure economically it's the right thing to do. So I'm pretty comfortable with where we are on that, and I like the ability to trade off those things. LPDDR3 is going to be the workhorse for mobile for a long, long time, and so we're going to make sure we've got the most cost-effective solutions there as well. In terms of 3D, I think 3D is going to bring great things to mobile, both some MLC but also TLC. As soon as Scott's got that ready to roll, we'll utilize it. The impact of Apple introducing really high-density NAND devices is going to go everywhere, and I think 3D will help us get to those densities pretty quickly, because it's going to be available, you're going to want to store all your data, and 3D is going to get us there with great performance. Two questions. On the mobile side, do you see any risk to increased content as operating systems improve? Is there anything out there that you see slowing down the increase in content? And second, when do you see the apps processor and LPDDR integrated into a single package? Is it more of a next-year thing, or maybe a couple more years out? So in terms of efficiencies of operating systems and such: the reality is, if you look at what you do in DRAM, display resolutions, multiple frame buffers, textures and graphics, all of those chew up a staggering amount of DRAM. Now I'm going to add a couple of 4K frames as I play my small 4K video clip. There's really nothing that forces us in the other direction that I can see. And I've talked to a number of you; you want to play with all these different phones.
The minute you see a stutter as you move from screen to screen, you almost get a visceral reaction; you don't want it. And ultimately, a phone maker has one shot at you, and they want to make sure the experience is as good and immersive as possible. They're going to provision the memory, because that's the simplest way to make sure you get great performance. Okay, the next piece: in-package memory. I think memory is going to get closer and closer to the processor, whether it ends up as a modified PoP or ePoP, or with some sort of unique high-speed interface. I think some people will put a small amount in the package out in time, but it's going to be driven by whether it really makes a performance difference. Do you see that more as next year, or more as 2017? I think it's beyond next year. It's beyond next year, yeah, because ideally what you'd do is repartition what the memory and storage hierarchy looks like when you do that, and there's only early thinking on how that works now. It's a ways out. Mike, thanks for the presentation. A couple of questions on applications that could drive more content and memory into phones. First, I know it's too early, but any update on 4K camera modules actually getting into phones, with people making 4K pictures and videos on the phone itself as content creation? And second, I think Apple just recently upped the maximum gigabytes per application allowed in their App Store from 2 to 4. So I'm trying to figure out: what does that mean for DRAM, and what does that mean for NAND, from your perspective? Well, I don't know the schedule for 4K sensors. I do know that 4K content is being streamed now, and people are going to want to start looking at it. I was laughed out of the building seven years ago when I said 720p would be on phones, and now 4K is going to be here.
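As a rough back-of-envelope on the frame-buffer arithmetic alluded to above (my own illustrative figures, not numbers from the presentation):

```python
# Back-of-envelope: DRAM consumed by display frame buffers.
# A 4K (3840x2160) frame at 4 bytes per pixel (RGBA):
def framebuffer_bytes(width, height, bytes_per_pixel=4):
    return width * height * bytes_per_pixel

frame_4k = framebuffer_bytes(3840, 2160)
print(frame_4k / 2**20)  # ~31.6 MiB for a single 4K frame

# A triple-buffered UI plus a couple of decoded 4K video frames
# already consumes well over 150 MiB of DRAM just for buffers:
total = 3 * frame_4k + 2 * frame_4k
print(total / 2**20)     # ~158 MiB
```

The exact buffer counts are hypothetical, but the direction matches the point being made: resolution growth alone pushes DRAM content up, before applications are even counted.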
So clearly, larger applications help in terms of the requirements for more memory. I think the thing that's unique is that NAND content is really starting to go up pretty dramatically now that you can get very large eMCPs. I've got a case study of one: I've got a sixteen-year-old daughter who's never had more than 8 bytes of NAND free on her phone, no matter what it is. And those are the people who are going to buy the phones for the next 40 years, right? They don't know what's on it, they don't care what's on it, but they want more of it. And I really think that as people get larger and larger devices, memory-wise, DRAM-wise the performance is going to get better, and NAND-wise you're just going to feel better about having your content. Last question over here. Okay, thanks, Mike. Doug Freedman, RBC. When I look at your target markets, you have four areas you highlight in your slides; you've got Apple and Samsung on one side and then your other markets. Can you talk about your opportunity at Apple? You're presently selling them DRAM, but I don't believe you're presently supplying any NAND to them. Is that something you target? And how should we think about your relationship with Samsung and what you might hope to do there, given that they have their own internal offerings? So my objective is to be a great supplier to everyone in the phone business, and I spend time with, I think, every phone company on the planet, and there are a lot of them. I think that as I get better and better solutions that solve their problems and support them better than anybody else, I'll be a better supplier and have a better chance of doing business. I have no problem doing business with my competitors, and no problem doing business with the largest guys and the smallest guys.
In terms of what their plans are, everybody architects their phones differently and does their own thing, and those decisions will either allow me to be a big supplier or not. But so far, I think the market is pretty happy with the portfolio we've got, and we'll continue to make it better and better, so that hopefully they all call and want to do business with us in all the different areas. Did I avoid talking about a direct customer well enough? It's hard. Is that it? Great. Appreciate it. Thank you. Okay, good morning, everyone. I'm going to do a wrap here of the business units and talk a bit about the innovation opportunities across this entire spectrum. Then we'll talk a bit about the model and turn it over to Mark Durcan, as well as Mark Adams, for a final wrap and Q&A. So hopefully this morning you got a feeling, across all four of these business units, for the staggering potential in front of the existing product portfolio: how excited we are about the opportunities driving that, both from the demand side and the innovation side, and really where memory and storage are adding value out in the industry today in ways they never have before. Frankly, what we feel is even more exciting than that is that behind this first layer of innovation, there is a sea change happening across the entire computing landscape, top to bottom. The opportunities that opens up, and how we think about the role memory can play there, is frankly probably the most exciting thing in front of the company today. It's a set of opportunities I think we're just scratching the surface of.
So I want to talk about a few of those and what we're doing about them, a bit segment by segment. You heard Jeff Bader and Mike Rayfield talk about the Internet of Things: obviously enabled by large-scale connectivity, getting computing moved out in ways it never really has been before. What that means to a company like Micron is, frankly, an opportunity to move memory and storage subsystems into platforms that have really never considered themselves consumers of memory before. And frankly, this is a space where we're in the first inning, to put it bluntly, outside of the client applications. But there's a common thread here, which is that you have a very wide, diverse set of enablers, of application processors, and increasingly they're looking for ways to attach memory and storage subsystems, high-performance subsystems by the way, to their applications. So the things we really care about here are ease of use, and a performance characteristic that focuses on power and battery life like no other, given the nature of the applications. So it's an innovation opportunity where we're looking at the DRAM, NAND, and NOR portfolio, even forms of emerging memory, and asking: how can we combine these to get the power of the solution down, as well as the ease of use, enabled much more quickly? Mobility: this is a biggie, and I think you got a good flavor of that from Mike's presentation. Frankly, if you think about an Internet of Things platform, mobile five, six, seven years ago looked an awful lot like that in terms of the application processor and the memory subsystems around it. You think about it today, and these systems are just as concerned about power.
But what they're much more concerned about now is actual performance. And that is a gold mine, quite honestly, because the keys to better performance lie in better memory systems and better storage systems. So what you're seeing here, and Mike talked a bit about LP4, is an imperative: how do you get much more DRAM and larger storage subsystems closer to the processor itself, keeping the power down and making sure we can deliver a much better computing subsystem at acceptable power? Then you get to networking. This is a segment Tom covered, and it's been a good segment for Micron. Frankly, it represents the first of the big segments where memory innovation was not just critical, but the only way in which new systems could advance. We're seeing that today with some of what Tom talked about with HMC. Before that, it was Micron's reduced-latency DRAM, where there was no other way forward outside of a much more advanced memory, something that was not offered through JEDEC, something that required real innovation to go do. The fact of the matter is that next-generation systems cannot be accomplished any other way except through this kind of memory innovation. And we were talking earlier in some of the Q&A about partnerships: the kinds of companies coming to Micron looking for innovation simply to enable and deliver their next generation of high-end networking infrastructure, the big platforms, the control-plane and data-plane architectures necessary to move video through, the 100-gig standard and soon the 400-gig standard. It just can't be done outside of memory innovation. Cloud computing: this is the broad volume server space today, the data centers, virtualization.
What we care about here, and the innovation opportunities, comes down to an imperative of making sure that not just in main memory, but also in storage, there is very low-latency access to whatever the hot data is. Okay? And that hot data can move around all the time. So there's an imperative to make sure the latency across every single one of these memory and storage hierarchies is minimized as much as possible. Think about the gap between memory and hard disk drives: a relatively small gap 20 years ago that widened over time. Obviously, NAND has come in to fill that gap. Here, you have the same kind of gap opening up, and that creates the opportunity for true storage-class memory; I'll talk more about that. But what storage-class memory does, what PCIe drives do, even innovation inside the DRAM subsystem, is give you a much better opportunity in virtualized systems to truly make sure, for certain data center workloads, that the hot data can be accessed wherever it randomly sits. And finally, you get to HPC: big data, in-memory databases, be it for advanced scientific computing or for fast pattern recognition. This is a space, quite honestly, where the insatiable need for more memory with quick access is driving probably the most interesting memory subsystem advancements as well: HMC used slightly differently, in a way that makes sure large amounts of data can be accessed with very, very low latency. Getting that memory fabric done right is, frankly, a problem that needs a new memory interface, and that's what HMC and some of the technologies we're working on are meant to go do. So you look across all of this, and the question is, really, what are we doing about it?
What all of these have in common, in terms of the silicon participation in these segments, is a declining percentage of the silicon going to the actual processor, be it the apps processor or the high-end server processor, and more of the silicon going to the memory and storage portion of the application. And by the way, what's happening by virtue of this innovation is that the value of the memory is no longer just the number of bits. If you think about the last 20 years of the memory industry, it was about increasing density, generally with standard interfaces that made some advancement; but those interface standards were pushed through JEDEC, which functioned as a bit of a de facto marketing operation. What you have now is that the innovation, the value, is in the interface itself. Okay? And it's not just the interface. It's the form factor, the particular standard with which we talk to the memory, how we think about the intelligence in the memory, how the memory manages itself. So there's a world of opportunity here. Pulling it back around to what we're doing about that: three technologies. These are in various stages of development today, development and enabling, frankly. And I will tell you that these are big technologies that require a lot of R&D and product investment. The three technologies I'm going to show you today are pretty good proxies; I'm not going to show you everything we're working on, but this gives you a good flavor of where we're putting our bets and where memory can do the most good. First of all, in-package memory. Mike spoke about it briefly. The fact of the matter is, beyond LP4 in mobile, and beyond DDR4 with some expansions inside high-performance computing machines, the data pipe is the problem. There's no good way to increase the speed of that pipe if the layer of memory is sitting outside of the processor package itself.
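To put rough numbers on the "data pipe" point (illustrative arithmetic of my own; the lane counts and per-pin rates are hypothetical, not figures from the presentation):

```python
# Why widening the pipe inside the package matters: an external
# DDR-style bus has few pins running fast; an in-package link can
# run ~1000 lanes at a slower, power-friendlier rate.
def bandwidth_gbytes_per_s(lanes, gbits_per_lane):
    return lanes * gbits_per_lane / 8  # bits -> bytes

external = bandwidth_gbytes_per_s(64, 3.2)      # 64-bit bus, 3.2 Gb/s/pin
in_package = bandwidth_gbytes_per_s(1024, 2.0)  # 1024 lanes, modest 2 Gb/s
print(external, in_package)  # 25.6 GB/s vs 256.0 GB/s
```

Even at a lower per-lane rate, the order-of-a-thousand-lane connection wins by roughly 10x, which is the architectural argument for moving memory into the package.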
So in-package memory, in one form or another, is getting worked on through partnerships, through the kinds of technologies Micron can bring to bear, with the imperative of getting the memory closer to the processing itself. And that's something you're going to see more and more of. It's a big architectural win, enabled through things like through-silicon vias, which we're in production with. It's also enabled by advanced packaging, to make sure we can do connections on the order of a thousand signaling lanes between the memory and the processor itself. So think of in-package memory as really the next innovation step along the line. I like to think of it as what embedded DRAM was talked about delivering 15 years ago. We went through the embedded DRAM phase, and the fact of the matter is that combining logic and large amounts of DRAM on one piece of silicon was talked about as a big architectural trend, for a whole bunch of design and manufacturing reasons, and outside of a few key applications it didn't make a lot of sense. But what you can do through advanced packaging technologies, again to get the memory closer to the processor, is a pretty powerful thing, and in-package memory does that. Second is getting the processing to the memory. This is one way you can do that: getting the processing much closer to the memory itself. What we're doing here is pattern recognition in a specialized memory architecture that ensures you get the bandwidth value of memory, the real parallelism of memory, to do advanced pattern recognition on binary data, real-time data. It may be network traffic, looking for virus patterns; it may be bioinformatics, where the kinds of searching, the kinds of graph searching you need to do, are best done inside a large DRAM array, with the pattern recognition happening at the lowest granular level possible.
So it's a pretty powerful technology, and in our mind it's where the big HPC space is headed: ultimately getting processing dispersed into smaller units and out closer to the memory itself. Finally, storage-class memory. Again, Scott touched on this briefly. We are looking hard, in what I would call relatively advanced development, at more than one technology inside the company. Probably the best way to think about it is something in between DRAM and NAND, looking for better ways to slice the memory problem. Okay? Storage-class memory is a general term, but what it really means is something that fills that gap. So again, think about the historical gap between DRAM and disk: relatively small 20 years ago, it grows over time, and NAND comes in and fills it. Storage-class memory is how future memory systems will look, filling the gap that's now increasing between DRAM and NAND. There are various ways to think about this. You can think of it as much larger DRAM systems made to be persistent, so you don't worry about refreshes. Another way to think about it is that you take storage systems like NAND and move them to this form of memory at slightly higher cost, but with better latency, with access to the bits in significantly less time; that's how you re-architect that subsystem. So these are the kinds of advancements, as we think about the four business units we have, across this real continuum of opportunities out there. Number one, getting the memory closer to the processor. Number two, getting the processing closer to the memory. And number three, re-architecting this memory hierarchy. It gives us an opportunity like nothing, frankly, we've ever seen before.
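The DRAM-to-NAND gap being described can be made concrete with rough, order-of-magnitude access latencies (my own illustrative figures; the storage-class memory number in particular is a hypothetical placeholder, not a disclosed Micron specification):

```python
# Approximate access latencies in nanoseconds, to show the gap
# that storage-class memory (SCM) is positioned to fill.
latency_ns = {
    "DRAM":           100,         # ~100 ns load
    "SCM (target)":   1_000,       # hypothetical ~1 us, persistent
    "NAND SSD read":  100_000,     # ~100 us
    "HDD seek":       10_000_000,  # ~10 ms
}

# The DRAM-to-NAND gap is about three orders of magnitude;
# SCM sits inside it, just as NAND once sat between DRAM and disk.
gap = latency_ns["NAND SSD read"] / latency_ns["DRAM"]
print(f"DRAM -> NAND gap: {gap:,.0f}x")
```

The specific numbers vary by device and workload, but the thousand-fold jump between DRAM and NAND is the structural gap the talk refers to.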
I think of it in terms of the third phase of the memory industry. The first phase was all about fast advancement in densities and JEDEC-driven interfaces: the Micron of 10, 15, 20 years ago. The Micron of today is really in the second phase, where we've broadened the portfolio through a lot of the early innovation work we've done on RLDRAM and, obviously, the mobile portfolio. This third phase is where, all of a sudden, memory is the real solution to the level of computing challenges that now exist. Finally, in terms of the model to pull this off: without a doubt, these kinds of innovations are big, expensive endeavors, and frankly it takes an end-to-end machine to get them done right. So, going all the way from technology to the end application, you need to be thinking about all the pieces necessary to do this inside one operation, and to do it tightly, working left to right here. On the technology side, Scott's teams are focused not just on next-generation DRAM and NAND technology, but now on the packaging technology necessary for better form factors, on how we combine these technologies in the best possible form factors, through-silicon vias for instance, and on the emerging technologies of storage-class memory. Then you get to the solutions and engineering side, taking those core technologies and turning them into real solutions driven by the customers. Again, as we said earlier today, it's not just component design anymore; the fact of the matter is there's a lot of firmware. We got questions earlier about whether the deal we announced changes how we think about our own controller and firmware teams.
And the simple answer is it doesn't change it at all, for the simple reason that with the next generation of DRAM, and even today's generations of DRAM- and NAND-based products, we have an insatiable need for management IP that you deliver through firmware. Getting that collaboration in place was a great way to say, look, here's one area we're going to co-develop. But the fact of the matter is, you think about mobile, you think about PCIe, you think about all the interfaces coming along behind us and the management of memory, and it needs a lot of resources. This is probably, frankly, our biggest area of hiring right now: firmware resources to manage the memory and get the interfaces done right. Third column, the business units: we think this is a pretty good framework, and you heard from all of them today. It's a good way to make sure we're bridging the gap between the core technologies and the customer solutions. And finally, to that point, on the customer side of things, it's a much, much more diverse landscape than it's ever been. Darren spoke briefly about the direct business sales force. That's a pretty valuable channel, not just for the direct sales and faster forecasting, but frankly for understanding what the customers' problems are and making sure we have earlier access to those kinds of insights. That's what informs the innovation machine. It all really coalesces to make sure you've got a virtuous cycle here, from front to back, that truly solves real-world application issues in world-class time to market. So we like the model. There are a lot of scale benefits here, and a lot of benefits in terms of OpEx. We have all the technology we need internally to do this. We're not afraid to partner where necessary, but we think it's a pretty good model, one that has shown the ability to innovate and gives us the best chance going forward to be the innovation leaders in the memory field.
So with that, we'll open it up to Q&A and get Mark and Mark back up here as well. Yeah, Monica. Thanks for the presentation. Could you maybe talk about the margin profile of the different segments of both DRAM and NAND? And is it the right margin profile for you, or are you thinking of moving the business toward a different margin performance? Yeah, great question. Probably the best way to think about it is that, of course, there's a diverse margin profile, top to bottom, across the various subsegments. Probably more interesting is that the margin profile evolves over time. And really, the margin profile we push for is to make sure we're getting our memory to where it can do the most good, where it's valued most highly. By virtue of these kinds of problems and these kinds of innovations, we have seen that margin profile change notably. DRAM exhibits a good example of that today, and as you heard from both Darren and Mike, we're working on penetration into the value-added subsegments in NAND. The higher-margin opportunities there are really obvious: enterprise computing, for instance, and enterprise storage, as well as the next generation of eMCPs in mobile, which we believe gives us the best opportunity in nonvolatile subsystems to really move the margin lever. And the next generation of products, informed by these kinds of challenges, starts to give us the next level. With this kind of portfolio and these kinds of challenges out there, these are all much stickier opportunities: much more custom, much more focused on working with smaller numbers of customers to solve their problems in a direct way that's not commoditized at all. So we like the model quite a bit. Yeah. Hi, thanks. Could you talk a little bit more about the Automata processor? Yeah.
Seems very interesting. I'd like to kinda understand, I believe it's more like a coprocessor, or like a process accelerator, so it doesn't exactly replace processors. Could you talk a little bit about what the timeline is, and do you partner with anybody in this area? Yeah, great question. So the Automata processor, this is really representative of what these kinds of architecture overhauls look like. Probably the best way to think about it is as a coprocessor. So the first instantiation of this will sit on a PCIe card inside of both networking boxes as well as high performance computing servers, and through the PCIe card, it's addressed with traffic that's directed to it by the main system processor, directing the traffic to what this pattern recognition engine does very, very well. And as we've described in the past, what that is, is really taking a standard computing instruction set and somewhat turning it on its head. It's a way of saying, look, rather than taking this one thread of computing traffic and going through line by line of code as fast as it possibly can, it does an elemental instruction, as we call it, an AND or XOR, this kind of thing, and it applies it to an entire array of data in a way that only DRAM can do. It chunks through the data page by page in real time; every 10 nanoseconds, you will get an answer out of this machine whether you like it or not. That's a level of parallelism that no other processor can do. But to your point, to feed that processor well, you would put this as a coprocessor inside of a machine and direct the traffic geared for that kind of a pattern recognition system to it. Now, to enable this, we readily admit this is not a standard architecture.
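The elemental-instruction idea described above can be sketched in software as bit-parallel pattern matching: each incoming symbol updates every candidate match state at once with a couple of elemental shift/AND/OR operations, instead of checking candidate positions one at a time. This is only a loose analogy (the classic Shift-And algorithm), not Micron's actual Automata design:

```python
def build_masks(pattern):
    # For each symbol, a bitmask marking the positions where it occurs in the pattern.
    masks = {}
    for i, ch in enumerate(pattern):
        masks[ch] = masks.get(ch, 0) | (1 << i)
    return masks

def shift_and_search(text, pattern):
    """Return the end positions of all matches of pattern in text.

    Each input symbol costs a constant number of elemental bit operations,
    regardless of pattern length: all candidate match states advance in
    parallel inside one machine word."""
    masks = build_masks(pattern)
    accept = 1 << (len(pattern) - 1)  # bit set when a full match completes
    state = 0
    hits = []
    for pos, ch in enumerate(text):
        # One "cycle": start a new candidate, advance all others, AND with the
        # mask of positions where this symbol is allowed.
        state = ((state << 1) | 1) & masks.get(ch, 0)
        if state & accept:
            hits.append(pos)
    return hits
```

Here a single machine word stands in for the array of match states; in the hardware analogy, the DRAM array plays that role at vastly greater width, which is where the "answer every 10 nanoseconds" parallelism comes from.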
As a matter of fact, we put probably just as much effort as we put into the silicon into something called the software development kit, which is really the kinds of tools necessary to take a given problem from the networking space or from an HPC space and make sure that a given application can easily code up their problem in a way that it can be solved by the Automata processor. And frankly, this kind of a platform is relatively nascent today. So this has a long enabling time in front of it, but the level of interest from the networking community, from the HPC community, and from areas as diverse as bioinformatics has been unprecedented. Don't think about this as 2015 revenue. I wanted to give you a picture of some of the things that we've been developing, but this is in sampling form today. Okay, good. With that, we're gonna go ahead and turn it to Mark Durcan for a wrap-up. Thank you very much. Alright. Thank you, Brian, and thank you, everyone, again, for coming today. I've got just a few quick slides to touch on a few topics we haven't covered yet, and then I'll ask Mark to come up and do a little Q&A with me. But let me start by saying you've heard a lot already today about the things on the right of this page, which are the things that drive our business. You've heard about the importance of products, market segments, markets, developing the right customers, having the right folks go and talk to them, and the investments we're making in all of those things. And you've heard about technology and its importance to drive those products and drive those future markets, and the importance of investing in our business so that we have the right mix of manufacturing capacity out there to be competitive and to deliver those products out into the future. And you've heard about solutions and all the things that we're doing for solutions.
And I'll come back to both the capacity and the partnerships here in just a minute. Those are all the things that drive business, but we don't lose sight of the fact that this is a business, and that you guys are all here because you're interested in how successful our business is on your behalf. And so on the left here, I wanna just give you a brief view of how you can look at our business. One, of course, is in metrics like how we're doing from a revenue growth perspective, etcetera, or gross margin. But I think the more important metric for you guys to focus on, for a company like Micron that has a fair amount of complexity in its overall corporate structure, is: what is the return on assets? What's the return on invested capital that we're delivering on your behalf? And I apologize for all the extra numbers, because the only one I really want you to look at or think about is the non-GAAP number at the bottom there, which is the one I think is really the most relevant in terms of thinking about your money. And so, you know, we continue to see improvement. I don't know that that will improve forever, because we do need to make investments in our business to grow into the future, and we have great growth opportunities. But we continue to make improvement. And really the return, when you think about what this company is doing by leveraging the partnerships, by acquiring assets at the right time and in the right way, and by delivering the right products to the markets and customers, I think is pretty phenomenal. And I'll come back to that here in just a minute. I want to talk a little bit about how we're making progress on our capital management.
But before I do that, I thought it's worth just quickly reviewing a slide we've shown you before, which is: what are the high-level metrics we're thinking about for the management of this company's capital? Obviously, we want that ROA to be significantly in excess of our cost of capital, which is currently running about 10%, and we continue to make progress on our cost of capital. We'll come back around to that in a second. We said we wanna maintain a strong balance sheet. This company has great opportunities out in the future. In the past, having a strong balance sheet has been important to us for a number of reasons, not least of which is it's given us the ability to go take advantage of opportunities in the marketplace when those opportunities present themselves. And we wanna make sure that we don't lose that arrow in our quiver, so to speak, from a strategy perspective. The way we're thinking about our minimum cash balance is this: last 12 months' SG&A plus R&D plus current debt. That number currently calculates to about $3.4 billion, and I'll show you how we're doing relative to that. We wanna continue to make sure we have access to low-cost capital. And we wanna make sure we keep our leverage ratio in a reasonable range, and we've defined that for you guys as 1.5x. And finally, this is a business that has real growth opportunity, which we think is very, very exciting, and it is worth investing in. But having said that, we also believe that, while it will vary year in and year out as we think about greenfield capacity and various technology transitions, etcetera, over time this business will become less capital intensive, and we think we need to exercise discipline relative to how we invest and when we invest. And so we have the targets relative to CapEx as a percent of sales over the long term.
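The cash-floor and leverage rules just described reduce to simple arithmetic. A minimal sketch, where the individual component values are hypothetical placeholders chosen only so that the total lands near the roughly $3.4 billion figure quoted (all figures in $ millions):

```python
def minimum_cash_balance(ttm_sga, ttm_rnd, current_debt):
    """Minimum cash target as defined in the talk: trailing-12-month SG&A
    plus trailing-12-month R&D plus current debt."""
    return ttm_sga + ttm_rnd + current_debt

def leverage_ok(total_debt, ebitda, max_ratio=1.5):
    """Check debt-to-EBITDA against the stated ~1.5x leverage ceiling."""
    return total_debt / ebitda <= max_ratio

# Hypothetical components summing to approximately the quoted ~$3.4B floor.
floor = minimum_cash_balance(ttm_sga=700, ttm_rnd=1500, current_debt=1200)
```

With actual financial-statement inputs, the same two functions would reproduce the company's stated targets; the point is just that both thresholds are mechanical checks.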
So how are we progressing? As I just showed you, last 12 months' non-GAAP ROA, 26%, is well in excess of our cost of capital. Our capital expenditures over the last 12 months have been about 19% of sales, so we're in that range. When we think about the target capital structure, we continue to make progress there. We did just do a new offering a number of weeks ago and raised $1 billion worth of cash, which is now on our balance sheet. So you can see the cash on the balance sheet is significantly more than that minimum balance. However, I think we've done a reasonable job of articulating for you some of the uses of that on a go-forward basis, relative to dilution management as well as other corporate objectives. And finally, we have seen an upgrade in our, sorry, credit rating over the last number of months, and we believe that's a positive indication we're doing the right things for the capital structure. Finally, many of you are also interested in what we're doing to make sure we're enhancing the value of the shares of the company. I think the management team is also very, very interested in that and works with that aim in mind. Over the last 15 months, we've returned $2.8 billion worth of cash to shareholders through convertible note repurchases. We announced a couple of months ago a $1 billion stock repurchase that had been authorized by our board, and while we have limited windows, we wanna be able to use that opportunistically. We have so far executed roughly $200 million worth of purchases, and we've repurchased 6.5 million shares, plus or minus, in that 6 to 7 million share range. I don't wanna be too specific yet, although we'll certainly report that in due course. And so that will continue on into the future as we see the right opportunities in the marketplace.
So in net, we've reduced 111 million shares, which is approximately 9%, through dilution management over the recent time horizon. And, humbly, I will tell you that I think we're doing an okay job overall, in aggregate, as we think about how we balance the money we're using to repurchase shares and manage dilution in the company. Now, on the partnerships. We've got a number of very significant partnerships in the business. One of them is Inotera. We've been partners with those folks for a number of years now, and we recently announced that we have agreed to restructure what the supply agreement with Inotera looks like. On that one, I think it's worth backing up a little bit and thinking about the fact that really, in our business, we want to have enduring partnerships, although the reasons for partnerships can change over time, and what they're intended to accomplish can change over time. We wanna make sure we have partnerships that are useful for the company and that can endure and change as we move through different periods. So I think it's worth backing up and thinking about the end of 2012, when Micron was in the middle of its sponsorship of Elpida and was preparing to bring all that new capacity into the company and sell it out into the marketplace, in what, at the time, was a pretty weak market. And when Nanya was struggling in its own business and wanting Micron to take more capacity risk and take responsibility for more of the capacity coming out of Inotera and sell that into the marketplace. We saw great value in that capacity, but we also knew that we needed to have risk mitigation against down markets. And so the agreement we put in place at that time with Inotera was that, yes, we will take all the output, but we need a pricing mechanism that protects us.
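As a quick sanity check on the repurchase figures above (the inputs are the approximate numbers quoted in the remarks, so the outputs are equally approximate):

```python
# Back-of-the-envelope checks on the quoted buyback figures.
spent = 200e6          # roughly $200M executed so far
shares_bought = 6.5e6  # roughly 6.5 million shares
avg_price = spent / shares_bought  # implied average repurchase price, ~$31/share

reduced = 111e6        # shares reduced through dilution management
pct = 9 / 100          # "approximately 9%"
implied_base = reduced / pct  # implied starting share base, ~1.2 billion shares
```

Both derived numbers are consistent with the quoted ones, which is all this check is meant to show.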
It gives us less risk on the downside in a difficult market, because we were also, you know, trying to buy Elpida out of bankruptcy. And so what that market relationship looked like was a market-minus that really looked more like an annuity to Micron. It was always gonna be positive in terms of the margin we generated, but it was gonna limit our upside somewhat. And that was okay with us, by the way, because in a good market, we knew that for Inotera to be useful as a partner for Micron over the long haul, they needed to repair their balance sheet and generate the cash to invest in 20 nanometer going forward. And so that was the relationship just prior to the new relationship we announced. The new relationship we announced is also a function of what the market conditions look like, but now it's a margin-sharing agreement, which means Micron takes more risk in a poor market, and we make more money in a good market. And, as a believer in this business and all the things we told you today, we think it'll be a pretty good business. And so this is a deal that we think is right for the time. It's a more equitable distribution of the rewards of the business that we're helping create with our technology and with our product portfolio. And while we take more risk, we think the ability to make more money, if we execute well from a technology perspective and from a product portfolio perspective, is reasonable. And, you know, the folks at Inotera are smart folks. They understand the value of the Micron relationship. This really is, in my mind, a win-win. They get to keep Micron as a partner long term, with a good set of technologies coming downstream, a good set of products coming downstream, and really a stable operating environment out into the future. Now, I think there's been a little confusion: okay, so how do I model this? What does this really actually mean?
And the answer is: it depends on how we execute, how good our product portfolio is, how Inotera does in technology deployment, and what the market looks like. Because at the end of the day, the better the margin in the business, the more we're gonna have, and the higher that's gonna be relative to the relatively flat slope versus market conditions that we had in the previous agreement. I will tell you that a reasonable place to model this, for now, in terms of incremental cash margin to Micron in the 2016 year, is probably in the 8 to 10% range. It could be lower; it could be substantially higher. That's incremental cash margin to Micron under this agreement versus under the past agreement. But the biggest single lever is what you think pricing's gonna be in 2016. And to get to that 8 to 10% range that I just talked about, really the underlying assumption would be good execution on the technology deployment, because I think that's likely to happen, and maybe a somewhat pessimistic view of what DRAM pricing is gonna be in 2016. Because at the end of the day, the number is much, much bigger if we have flat pricing, and I think that's an outcome that could happen as well. So if there are additional questions on that, I can answer those a little bit further on. Singapore: we talked a lot about 3D today. I just wanna make one point pretty clear, because we kinda talked around the issue a lot. When we have 3D NAND technology deployed in our manufacturing fab, it's gonna be a significant cost improvement over what we would otherwise be producing, and it puts us on a roadmap that drives significant manufacturing efficiency on a go-forward basis relative to what could be achievable with planar NAND. It also drives much better performance, like Scott and Darren both talked about.
But it really is, long term, where we need to be, and that's why we made the decision to invest in this early. And we feel very, very bullish about our technology position here. That's why we're interested in making sure we have the plan in place to make the investments to transition our planar capacity in Singapore over time to 3D NAND. Now, to frame up what that looks like a little bit: if you think about the capacity in Singapore today, it's about 140,000 wafer starts per month. That's in Fab 10, but it also includes some incremental space in the old TECH Semiconductor fab in Singapore. If we were to transition all the NAND capacity in Singapore, which is our intent over time, I think we will leave some capacity behind on planar for a period of time to support legacy applications and lower densities that aren't cost sensitive, but I don't think that's a big piece of our overall capacity. Our overall NAND capacity is somewhere closer to 250,000 wafer starts per month, but this 140 that's sitting there in Singapore, I'm pretty confident over time we're gonna transition all of that to 3D NAND. And if we go through that process and we convert to our gen 1, which is 32 tiers, and then we convert to the second generation at some point while we're ramping this fab space, eventually the whole Singapore operation will be 3D NAND gen 2. And the bit efficiencies that will be driven as we do that are such that we'll have roughly the same number of wafers coming out, with the addition of this incremental space, but we will be able to drive 40 to 50 percent annual bit growth over an extended period of time within this platform. And probably beyond that gen 2 3D as well, because, as was pointed out, once you're on that trajectory, now you can add more tiers and scale and create a pretty good ongoing growth profile.
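To put those growth rates in perspective, 40 to 50 percent annual bit growth on a roughly flat wafer count compounds quickly. A quick calculation (the four-year horizon here is an illustrative choice, not a figure from the talk):

```python
def cumulative_bit_growth(annual_rate, years):
    """Total bit-output multiple from compounding a constant annual growth
    rate over a number of years, with wafer count held flat."""
    return (1 + annual_rate) ** years

low = cumulative_bit_growth(0.40, 4)   # ~3.8x the bits after four years at 40%/yr
high = cumulative_bit_growth(0.50, 4)  # ~5.1x after four years at 50%/yr
```

So the same wafer base would be putting out roughly four to five times the bits after four years of that trajectory, which is the substance of the "extended period of time within this platform" claim.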
So when we do that, we're still gonna look at the markets. We're still going to modulate investment based on how the technology is progressing, what the customer demands are, and what we see as the return on invested capital at any given point in time. In some years it might be more than this; in some years it might be less. But over time we can certainly do that, and that compounds to a pretty decent number that will, I think, be beneficial for Micron's shareholders. We undergrew the market a little bit this year, or we will undergrow the market a little bit this year. We overgrew the market a little bit the year before, when we took TECH and converted it to NAND flash. So we will, as we move through time, in the NAND space and in the DRAM space, sometimes oversupply and sometimes undersupply. But in aggregate, our intention is to continue to grow with the market in both areas. On the DRAM side, I've got a picture here of MMT, which is the old Rexchip fab, on the left, and a schematic of Inotera on the right. The pink regions are empty fab shell space. So I'm showing you the existing fab footprint and the locations where we could add incremental wafers if we wanted to. The natural tendency, as we've talked about, is that as we migrate technology, the process complexity goes up. Going from 25 nanometer to 20, we're losing between 15 and 20 percent of the wafers in some fabs. The space in these two fabs represents roughly a 25% increment to our existing DRAM base. And so that gives us the ability to kinda keep up with market growth, should we choose to. This year, as we've said, we're very focused on technology enablement. We're not increasing wafers, and so we're growing a little bit slower than the market from a bit perspective. That doesn't change the long-term plan, which is to maintain share.
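Taking the two figures just quoted at face value (15 to 20 percent wafer loss on the 25 nanometer to 20 nanometer transition, and empty shell space worth roughly a 25% increment to the existing base), the arithmetic suggests that filling the shells roughly offsets the transition loss:

```python
def net_wafer_multiple(wafer_loss, shell_increment):
    """Wafer-count multiple after a node transition that forfeits some wafers
    to added process complexity, if the empty shell space is then filled."""
    return (1 - wafer_loss) * (1 + shell_increment)

worst = net_wafer_multiple(0.20, 0.25)  # 20% loss fully offset: multiple of 1.00
best = net_wafer_multiple(0.15, 0.25)   # 15% loss: slight net gain, ~1.06
```

In other words, the shell space is roughly sized to hold wafer output flat through the transition, leaving bit growth to come from the technology itself, which matches the "keep up with market growth, should we choose to" framing.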
And yes, we wanna migrate our capacity to more value-added segments, and that doesn't necessarily always mean that we're focused on bits; we're not. We're focused on knowing our business and doing what's appropriate at the right time based on ROIC. And the way we like to set up our fabs, as we talked about with NAND in Singapore and here in Taiwan with DRAM, is so we're able to add capacity gradually and incrementally based on market conditions. In all these fabs, we've already got manufacturing scale. We can add according to market demand without flooding the market or having the huge surges in wafer supply that are otherwise necessary to reach manufacturing-efficient levels. So I'm not gonna read you all this stuff, but hopefully we've convinced you that we've got a pretty good business here, that we're running it well, and that we've got a very bright future. We're all very focused on that. We're focused on our markets, our customers, our technology, our people, and we're focused on the business, and on making sure that we make the right decisions as we think about capital allocation for the future. So, Mark Adams, if you wanna come up, I'm gonna pitch all the hard questions to you, and I'll take the easy ones. Over here, thanks. I have a few questions. This is Romit Shah from Nomura Securities. First, can you guys talk about just your expectation for industry margins and where you think they're gonna go? Today, when I look at the numbers, it looks like SanDisk's margins, or product margins, are running in the mid-40% range, Samsung and Hynix are in the thirties, and you guys are a bit below that. But a lot's changing. You guys are forming new partnerships, there's a big technology transition in front of us, and it seems like mix is up in the air. So how are you thinking about NAND industry margins and improving, I guess, that spread between you and the other players in the space? That's my first question. Okay.
So there's a lot in that question. Let me start with, you know, I think the NAND market will be volatile over time. And I certainly don't wanna get in the game of predicting what industry margins are gonna be for our business going forward, particularly for the NAND business, because we have, as you pointed out, significant technology transitions going on. And when that happens, it's tough to understand exactly how the technology is gonna be received in the marketplace, exactly what the yields are gonna be when you ramp new capacity, and exactly what the elasticity of demand, etcetera, is. And so I think it'll be tough to keep supply and demand matched through time as we deploy these advanced technologies. But I think it'll be a good business. I think margins will be good, because I think there's a lot of demand, and I think there is a disciplined approach to how people are driving this technology, at least as is visible so far. So I think that's part one. Part two: how is Micron improving its NAND margins through time? Well, we've talked about the importance of eMCPs and moving more of our NAND into the mobile space. We've talked about the importance of having a better product portfolio in the client SSD space, including TLC NAND as a piece of that portfolio. We've talked about enterprise and some of the benefits we expect to get through the Seagate relationship, as well as other enterprise solutions we're working on internally. And we've talked about 3D NAND being what we believe is a significant technology differentiation for Micron, and one that we feel very, very positive about. So we've got a lot of good things going. I think that over time, that will close the gap. Now, you said something specifically about SanDisk.
You know, I think SanDisk is a little bit unique in that they have a significantly higher operating expense than some of the other competitors in the marketplace. A lot of that's driven by the fact that they're in retail, and that market, I think, has been generating very positive margins, but it's starting to slow a little bit. So where they go, where we go, I think that's tough to predict, but I will very confidently predict that you'll see a narrowing in the operating margin between Micron's NAND business and all of our competitors' on a go-forward basis. Okay, that's helpful. My second question is, I'm still trying to get my arms around the new Inotera agreement. You said, I think, as a baseline, to assume 8 to 10% incremental margin, but it's price dependent. And I was wondering if you could just give us a couple of scenarios in terms of pricing and what the impact would be to your incremental margins. Thank you. Yeah. So, at flat pricing, you should probably more than double that. And at pricing roughly a third of what the weighted average is in the marketplace today, we would still be making money, but it wouldn't be as good as the current agreement. That kind of bounds it for you a little bit. Just an extension of the same question: when you said 8 to 10 percent extra margin, is that Inotera's extra margin? No, this is extra cash margin to Micron. Okay. You know, I didn't listen in on the Inotera call or what they had to say. My understanding is they talked about it as, they believe their margins will be 5% lower in 2016. I think they may have some margin improvement in their business built into those numbers, or they may have different assumptions as to what they think the market's gonna look like in 2016. I wouldn't wanna try and bridge that gap. That's a relatively small difference in what could be a relatively large spread of outcomes.
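One way to see why a margin-sharing deal behaves this way is a stylized per-bit model. Everything below is hypothetical: the actual contract terms are not disclosed in this detail, and the discount, share fraction, prices, and costs are made-up placeholders chosen only to reproduce the qualitative shape described (market-minus is a flat slice of price; margin sharing rises and falls with profitability, so flat pricing more than doubles the benefit versus a pessimistic-pricing case):

```python
def micron_take_old(price, discount=0.10):
    """Old deal (stylized): Micron buys at market-minus, so its spread is a
    roughly fixed slice of price regardless of Inotera's profitability."""
    return price * discount

def micron_take_new(price, cost, share=0.5):
    """New deal (stylized): Micron shares Inotera's cash margin, so its take
    scales with (price - cost) rather than with price alone."""
    return share * (price - cost)

# Hypothetical per-bit figures indexed so today's price = 1.00.
# Pessimistic 2016: price down to 0.80, cost down to 0.55 on 20 nanometer.
pessimistic = micron_take_new(0.80, 0.55) - micron_take_old(0.80)
# Flat pricing, same cost improvement.
flat = micron_take_new(1.00, 0.55) - micron_take_old(1.00)
```

Under these placeholder numbers the incremental benefit is positive in the pessimistic case and more than doubles with flat pricing, matching the qualitative bounds given in the answer; the real sensitivities depend on terms that were not disclosed.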
A question on the NAND market. If you look at NAND bit growth, that's been between 35 and 40 percent over the last couple of years, which we have seen kind of oversupply the market. And all the NAND vendors are still talking about 35 to 40% market growth going forward. And you just talked about, maybe with the extension in Singapore, you could get 40 to 50%. So is it possible that we could be in this oversupply mode in NAND for a long time? And maybe the supply growth needed for NAND is really not high thirties; it's probably much lower than that, maybe low 30%. Okay, so a few things. Anything's possible. I think that's an unlikely occurrence in the NAND market, because I think there is real, significant elasticity of demand. And I think that while things may get out of balance from time to time, the companies in the marketplace today will behave as they have been behaving over the last couple of years, which is with a focus on return on invested capital as opposed to market share, where we have five incumbents with real critical mass and significant market share. There was another part to the question, though. Maybe the supply growth is... oh, yeah. So I didn't say we're gonna grow capacity 40 to 50%. I said that there's the ability to grow bits 40 to 50% for a very extended time frame, the implication being I don't think we need to add any new cleanroom space unless we think that there's a different demand curve going forward after we put this piece in place. Now, we might grow more than that in one year; we might grow less than that in one year. That's a decision we're gonna make as we go, but I don't have any reason to believe that, as an industry, the incumbents would go out and oversupply the marketplace for an extended period of time. In fact, that was one of the most compelling reasons to do Singapore, because that gave us the ability to modulate. If we'd gone to another greenfield somewhere else...
It would have been a little bit more difficult to get the right scale and the economics to run a profitable business that way. Singapore allows us the ability to add and leverage the administration and the operations in place, and scale appropriately to the market demand. Two questions here. Going back to the Seagate arrangement, it seems to me there's more to it; it's definitely more than just getting some forecast on SAS demand. Can you elaborate on how this arrangement or alliance came about? Seagate has historically had a relationship with Samsung, and now you're coming in. And what are the risks or rewards here? And again, I'll go back to: it's gotta be more than just getting a forecast on NAND demand for SAS applications. And I have a follow-up. Okay. So let me answer, and maybe Mark wants to add something. You know, I don't think I wanna speak for Samsung or Seagate about that relationship. All I can say is, apparently, it wasn't working out to either party's satisfaction, because I don't think that they've been doing a whole lot together recently. So, you know, we've done a few relationships. I think we understand that for relationships to be successful, both parties have to get something they want and need, and it has to be a cooperative and collaborative relationship over time. There is a lot of that cooperation and collaboration going on here. As Darren talked about, there's a lot of collaboration going on with knowledge about how the NAND works, what knobs to turn in the firmware, and how to think about controller functionality, to make sure the whole thing can be optimized on a go-forward basis. There is a right to supply that Seagate needs in order to grow its business. There is a second source in the marketplace that customers want, to feel comfortable that they can buy these drives in large volumes.
And, you know, there is access into the enterprise market for Micron, with a drive and a partner that has a history and reputation for delivering quality drives and the processes to support them. So there's a lot of things in there that make a pretty compelling package, that create a lot of extra value and maybe a significant share in a profitable and rapidly growing market segment. I think it creates a pie that is big, and it's pretty exciting for Seagate and it's pretty exciting for Micron. Sure. And then my follow-up has more to do with the near-term business. During the November conference call, you talked about stabilization in NAND prices, and DRAM came in a little bit worse than expected. How do you see the market, how has that evolved since, and do you see inventory that could be a factor after the Chinese New Year holidays? Yeah. So I'm going to let Mark talk about market conditions right now, but I'm not sure I understood the first part of your question; it was relative to something somebody said when? Going back to the November earnings conference call, where I think Mark suggested that NAND prices had actually started to bottom out. Yeah. And then DRAM prices were slightly weaker, but then it seems like even DRAM has bottomed. So maybe if you could provide some updates, and how you see inventories in the current environment being a factor post Chinese New Year holiday. So, Mark, you wanna... Sure. I think how we look at it is that the inventory situation could be a combination of things. One of which is, when competitors or companies are actually making transitions and they have lower-yield product or lower-spec product, what have you, that hits the market price and has a short-term impact on market conditions. We're not of the opinion that inventory is way out of whack post holiday.
And quite honestly, we think that, seasonality-wise, this is somewhat to be expected. You could argue about timing and what have you, and we're not in the business of commenting on that, but we're not overly concerned about the directional signals as a trend going forward. We do think that this is more seasonality in play, and potentially some interim supply that hit the market from technology ramps and transitions. And we feel pretty good. You know, it's interesting: you're talking about kind of one part of the market, and we're seeing really strong demand signals in other parts of the market, which was my earlier comment. Some of our segments are still very much in constrained mode, and some of our segments are experiencing the dynamic you're talking about today. But overall, still pretty good. Excellent. Prices for both NAND and DRAM are kinda stable? Yeah, we're not here to update the guidance that we did give. This is Shrinivas Summit, Hamblatt and Company. Yep. My first question is, what are the gross margin improvements that, you might say, the investment community is not appreciating at this point? What are the initiatives that you're undertaking to improve those margins? In NAND or DRAM or both? Okay. So I think I already addressed some of the things that are going on in NAND, and they're positive trends across a number of different segments. Probably enough said on that, other than I think you'll notice it as we move forward quarter by quarter, and I think it'll be significant by the end of 2016. You'll take a look and you'll go, hey, that's pretty good. You know, on DRAM, obviously, we just talked about the Inotera relationship and how that might play out or not play out. Additionally, there's nothing like a well-executed technology transition to help drive margin in a manufacturing business.
Having said that, you know, be cognizant of the fact that as we spend capital, it takes a while for the capital to be installed, qualified, process-qualified, wafers loaded, run through the fab, ramped up, et cetera. So these things don't happen instantaneously, but I think over the long haul, as I've said for a long time now, closing that technology gap on the DRAM side, so that our weighted-average deployed manufacturing capacity is more similar to that of the other two DRAM manufacturers, will be a force driving gross margin. And as a quick follow-up, how can you give me confidence that your NAND plan is going to come to fruition? Yeah. I talked to a few of you last night at the dinner. If you go back and track what we articulated as our recovery plan two quarters ago, the elements are the same. And in fact, what you heard today was our mobile NAND business shaping up pretty well. What we heard today is that TLC enablement is meeting the milestones we had articulated in the past. We're quite excited about not just the announcement today in Darren's business, but the progress we're making on PCIe and enterprise storage in Scott's section today. So a lot of the elements that we talked about half a year ago, about what the recovery would take, are actually playing out as we predicted and as we called. And so we're extremely confident about the recovery. Mark, a couple of questions. First, your comments on Inotera were helpful, but for those of us that are still modeling-challenged, how would fiscal '14 have looked if the Inotera agreement were in place for fiscal year '14? How much more accretion to the model would there have been? Oh, yeah. I haven't run those numbers, John, but it would have been significantly more than the number I mentioned. That's helpful.
And then as a follow-up on the... The other thing to keep in mind here is supply, right, and what is the total bit output of Inotera in 2016 on 20-nanometer versus what it was in, you know, 2014 on 30-nanometer. So, you know, you've got more bits you're earning margin on, and then what's going to happen with ASPs. It's a very dynamic model. Got it. And then my second question is just on the DRAM market in general. Last year, Samsung grew bits a lot faster than expected coming into the year, and they gained market share. I'm kind of curious, from your perspective, is there a market share threshold that you don't want to fall below in DRAM? And then maybe as you address that question, there was an interesting one-liner on one of Scott's slides that said that the 1X transition was a greater-than-30% cost reduction versus 20-nanometer, which seemed a lot more than I would have expected. Is there something going on with 1X that makes it a very efficient node for Micron or the industry? Yeah. I think it's actually more about Micron internal dynamics than anything else, in terms of how the cost reduction going from 30 to 20 to 1X all plays out. Sorry, the first part again? Oh, yeah, market share. Yeah. You know, I wanted to make the point today that we're not real interested in giving up share over the long haul. We've got a good business. We've got customers that want to do business with Micron, and while my primary filter continues to be return on invested capital, you know, I don't want this business to go into decline as I try and drive higher and higher ROICs. So that's a little bit of a balancing act. One thing you should notice about Micron is we've done a pretty good job maintaining revenue share. And that's really how we think about the business. It's more about revenue than bits.
But having said that, I think this company needs to have the critical mass to be significant to the significant customers out there in the marketplace, and that requires having, you know, some significant bit market share as well. And, you know, we'll look at all those factors through time to make the right decisions. I guess a little follow-on to that, because that was pretty much going to be part of my question: your bit share versus revenue share. Can you maybe give us a little bit of what's going on underneath the covers of that? Because you have maybe ceded several points of bit share in DRAM in the last couple of years, but yet I don't hear you talking about what's really enabled you to perform so well on the revenue share side. Okay. So let's talk about bits first. In 2012 and 2013, we took TECH offline and converted it from DRAM to NAND. And we did that for a number of pretty good reasons, I think. That meant that we grew less than the market in that time frame. And this year, in 2014, we're very focused on technology transitions. Right? We've got a certain amount of capital, and we're allocating it to a lot of different things that I talked about. And the stuff that we're investing in the business is about closing this gap to drive margin to the right place and to make sure that we're seeing the right economics relative to our competitors. And so that's why we're focused on that today. In the future, I think we can take a different approach, and I wanted you to understand that I have the ability to add bits incrementally in the existing floor space. Now, how have we done such a good job on a revenue basis as we ceded some bit share because of the decisions we made?
You know, we spent a lot of the day talking about the progress going on in the mobile business. We spent a lot of the day talking about how we've been bit-constrained in servers, and that's a growing market; we can actually grow into that more now that we have more capacity than we did previously. We've done some great work in Jeff Bader's embedded business. The networking business is phenomenal and has more upside. So, you know, in all those market segments where Micron is focusing and working closely with customers to deliver differentiated value, we're having success. And that's, by the way, another reason why we feel comfortable that we'll solve the NAND gross margin gap here over the next number of quarters. Great. And for my follow-up, just a really quick one: the CFO search. Can you give us progress, or a time frame at which you might think it will be completed? Thanks. Yeah. You know, I don't want to set a time frame. I've talked to a lot of good candidates, and I've got candidates that I'm comfortable with. I'm still talking to more. You know, it's an important position, and I think the company deserves, or expects, that I go out and do a lot of diligence and bring in, you know, the absolute best candidate I can. And I don't feel in a rush. We've got a very, very solid finance team, from treasury to controller to Asia controller to tax and GAAP reporting. You know, we're in great shape on the finance team. I want to get a real superstar for a CFO and do that on the timeline that presents itself. We're right at time. Okay. Alright. Thank you all. Hopefully, we covered most of the things you wanted to talk about.