Seagate Technology Holdings plc (STX)
NASDAQ: STX · Real-Time Price · USD

Analyst Day 2019

Sep 19, 2019

Speaker 1

Please welcome Seagate's Vice President of Investor Relations, Shanye Hudson.

Speaker 2

Great. Good morning, everyone. Let me be the first to welcome you all to Seagate's 2019 Analyst Day. I'd like to start off by thanking those of you who are spending your morning here with us in New York City, including members of our Board and our executive team. I'd also like to thank those of you who are listening in on the webcast.

We sure appreciate your time as well. I have the enviable task of starting off and letting you know that management will be making forward-looking statements today. Those statements carry risks and uncertainties, and so we encourage you to learn more about those risks and uncertainties in the SEC filings that we have on the Investors section of our website. We will also be making use of non-GAAP financial information, and you can find reconciliations to the closest GAAP financial metrics in the presentation that will be posted later this afternoon. Okay.

With that little bit of housekeeping out of the way, some of you had come up to me earlier and made note that it has been about 4 years, almost to the day since Seagate last held an analyst event. And a little has changed in the world over that time. A lot has changed in the storage industry. And certainly, Seagate has pivoted to take advantage of opportunities that we see ahead and who better than our CEO to talk about those opportunities. So Dave Mosley is going to start the show off today.

He's been with Seagate for over 2 decades. He's been our CEO for nearly 2 years. Dr. Mosley has led technology. He's led operations.

He led sales. He knows this industry inside and out, and certainly the company inside and out. He's invited members of his leadership team to be here today as well, starting off with Ravi Naik, who's our CIO. Ravi embodies the voice of our customers and is going to speak to you today about the challenges that CIOs face in managing the ever-increasing amounts of data that we have today. He'll be followed by our Chief Technology Officer, John Morris.

So John is another 20-plus-year veteran of Seagate. John has probably had his hand in a lot of the innovations that have been brought to market in the disk drive industry, and he's going to speak to the strength of our technology and the innovations that we believe we'll be bringing to the industry ahead. And last, we have our CFO, Gianluca Romano, the newest member of our executive team. So Gianluca has experience at Micron, at STMicroelectronics, and at Numonyx. He'll speak to our financial strength.

And importantly, he'll talk about how we are creating value for all of our shareholders. And before I invite Dave to the stage, we've got a video that I think speaks to the scale of Seagate, and it speaks to the pride that our 40,000-plus employees worldwide feel every day when they don a Seagate badge. So thank you.

Speaker 3

From the first hard drives to solutions built for the rise of the cloud and the edge, triumphs of technology drive Seagate forward. But algorithms and technology can only do so much. They are not the curators of our times. It takes people to best metrics and achievements. Seagate's people have answered this call for over 40 years.

We are inventors

Speaker 4

Good morning, everyone. At Seagate, we are builders. And that pride, I hope, comes through today. The pride that we have in manufacturing our products and delivering them to our customers, and how important they are in the world. That pride really shines through all those employees in our vision, mission and value statements. If you really focus on our values, it starts with that team, including people around the world, suppliers, customers, our own employees, voices from many cultures, many educational backgrounds that have to come together to put together a very complex product. And it's an innovative product, very innovative. I hope you get a feel for that today as our Chief Technology Officer comes up and talks. And we do it in a very sustainable way.

Seagate's been around for 41 years. The next 10 years, I think, are going to be the most exciting because of the way data is growing and pivoting. And from my perspective, these three values, inclusion, innovation, integrity, together with sustainability, are all key to making sure that we position ourselves right. Thank you for coming. Shanye already thanked everyone.

But let me just say, I appreciate your time listening to our story this morning. It's a strong investment story, and it's all predicated upon the growth of data. It has been for the last 40 years, and it will continue to be. Data is exploding. It's still going.

It's a process that's actually very interesting. And we'll talk to you today about why we think that's happening, what the dynamics are, how to think about it, and how Seagate thinks about it to make the investments. We have spent a lot of time over the years building platforms. And we believe that we have the right platform now to intercept what's going to happen in the next 10 years. So we're going to walk through the platform strategy.

And the foundation of all that strategy is our storage technology. So we'll give some details on that. And finally, all that has to feed itself, so to speak, by creating shareholder value, making sure we can continue to make those investments in ourselves and allocating capital the right way across our investments and back to shareholders when need be. So in order to set the day, I think we're going to use a lot of words like exabyte and zettabyte and these are things that become a little cliche and sometimes we know what they mean and other people don't. Just for perspective so that you can navigate through all these terms today, I'm going to show a short video before I get started.

So can we cue the video? So the takeaway would be that in 2002, about 23 exabytes of data were created in that entire year. Now, in 2020, about 23 exabytes will be created in 5 hours, roughly the time it takes us to get through this meeting. That's how much things have accelerated since then. And we're on the cusp of yet another accelerated period.
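As a rough sanity check on those figures, here is a back-of-envelope sketch using only the numbers quoted in the talk (not any Seagate model):

```python
# Rough comparison of data-creation rates cited in the talk:
# ~23 exabytes created in all of 2002 vs. ~23 exabytes in ~5 hours by 2020.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

hours_for_23_eb_2002 = HOURS_PER_YEAR  # a full year in 2002
hours_for_23_eb_2020 = 5               # about five hours in 2020

speedup = hours_for_23_eb_2002 / hours_for_23_eb_2020
print(f"Data creation is roughly {speedup:,.0f}x faster than in 2002")
```

The same 23 exabytes moving from a year to five hours works out to roughly a 1,750x faster creation rate.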

And we'll walk you through why we believe that. Another piece of the framework to help understand this, the way we talk about it at Seagate, is a kind of history of the industry from a storage perspective, a very data-centric perspective. We won't go into the ancient history, but the ancient history is one of mainframe computers. The important point to take away from mainframes is that data was very centralized then. And most of the disk drives that were shipped at that time, when Seagate was originally founded, were not too far away from other disk drives because of that centralization.

Everyone in this room lived through the explosion of client server. Client server was a really interesting time. There were other applications that came along during that time, but what we call IT 2.0 was client server, and it was really an interesting period. It's very palpable for all of us. Notebook computers were needed for people to get onto the Internet for the first time, and there was a disk drive in there. Even in the enterprise, CIOs were standing up application-specific servers, sometimes overprovisioned and underutilized at the time, but there were a lot of disk drives there as well.

That explains why Seagate has shipped 3.2 billion disk drives in its career. It's a huge number. And if you think about our supply chain, how it was honed during client server, our customer relationships, how many interface transitions we went through, how many different applications we went through, it's fairly profound and important to understand in the context of today. But there's been a pivot starting in about 2005. Data went from that decentralized model, where it was right next to people, to a centralized model again.

Now there's still the decentralized pieces, remnants that exist and we'll talk about that. But the mobile cloud started rising. It's still going up actually back to a more centralized model. These breathing modes are natural, I think. There's reasons for them economically.

And we'll talk about that throughout the course of the day. But we call the mobile cloud IT 3.0, okay? And it's roughly where we are right now, still growing. And we'll give you some perspective of that today. But we don't think it's the end.

We actually think there's another decentralized model coming. And there are a lot of reasons for that. And we call that IT 4.0. You can see it already in our data and in our customers. And so we'll talk about those transitions to IT 4.0, which is the rise of the edge, okay?

Everything as you go through this is a natural process. There are a lot of business applications that are changing underneath. We realize that we have to make sure we pivot quickly to serve those application spaces. So what does all that mean, and how do we forecast demand? IDC has been doing this study for a few years, on the rise of the datasphere.

And in these IDC studies, they've actually amended it once now. The number hasn't really changed that much. They've been targeting 2025. And they say, if you think about the amount of data that will be created in 2025, it's roughly 175 zettabytes. So by 2020, the amount of data created is roughly a third of that.

And these are models. They're not perfect. They're estimates for what's going on in the world. Now most of that data, just like the 23 exabytes we talked about in 2002, most of that data is going to be thrown away. Most of that data will be thrown away.

And that's actually a profound comment because if you think about it, there are new business models that will arise by gleaning information from that data before it's thrown away, right? But nonetheless, most of the data will be thrown away. From IDC's perspective, some of it will have to be stored. And so there's a growth in storage below it. This is storage of all media types, keep in mind.

And there's a cumulative effect. In other words, say, for example, a disk drive that we ship today may still be in operation in 2025, probably will be, certainly in a data center. But the world will need roughly 17 zettabytes of capacity of various media types in 2025. So data is exploding. We're only storing a fraction of it.
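Putting the two IDC numbers quoted above side by side (a back-of-envelope sketch on the quoted figures, not IDC's own methodology):

```python
# Fraction of data created that could be retained, using the IDC figures
# cited in the talk: ~175 ZB created in 2025 vs. ~17 ZB of installed
# capacity across all media types.
created_zb = 175    # data created in 2025 (IDC estimate)
capacity_zb = 17    # installed storage capacity, all media types, 2025

fraction = capacity_zb / created_zb
print(f"At most about {fraction:.0%} of the data created could be stored")
```

That is the quantitative version of "most of that data will be thrown away": only on the order of one byte in ten can physically land on storage of any kind.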

But the installed base, if you will, for caches and different tiers of memory and even long-term storage is exploding. All of them are exploding. So before I jump into the future too much, I want to talk about the past, because I think it's very important to understanding Seagate. We grew up in client server. We believe we still have a very strong portfolio in client server.

We work this with our customers. Some of those customers are still around, and they're big, influential players. And remember, the IT 2.0, 3.0, 4.0 moniker speaks to markets; it's not really about individual customers. Some of them see these same transitions, and they may have to pivot from 2.0 to 3.0 or 2.0 to 4.0 as part of their processes. But our portfolio is very strong there, and we're very proud of it.

We're not investing a whole lot in it anymore, but we still have notebook drives that we make and mission-critical drives that we make. We'll continue to do so because those customer models will continue for a long time. And we realize we're not the market maker there, right? There's end demand. There are different requirements for some of those products that are still client server products.

There will be for a long, long time. They'll be going through their own transitions, and we will continue to use our brand there. Actually, another point on this is that we make a wide variety of products still for this. Say, for example, SSDs. We make SSDs for these spaces because in client server, we understand those architectures very well.

We were the first to ship Serial ATA (SATA) drives. We were the first to ship Serial Attached SCSI (SAS) drives. We've shipped more SAS drives than anyone else. And there are literally hundreds of millions of slots for these products out there in the world, enormous volumes. The slots will be refreshed.

Not everything will be new purchases right now. So from an SSD perspective, we know that market very, very well, and we can serve our customers well, and we are. We can talk about that a little bit later, and you can ask questions about it as well. Our brand is very powerful in this. In some sense, we will run this business as long as the customers are there and needing the products to do what they need to do in their markets.

But we're not investing a whole lot in it. It's an important point, though. If you start on the very left-hand side of this slide and think about the products that we designed for them, there was a wide array of products: some as small as almost the tip of my thumb, some that were in notebook computers, others that were big tanks of data, some that were very fast 10K and 15K RPM drives. Here's something interesting about IT 2.0 that people don't think about, though. Most of the drives were seldom used.

They weren't working very hard. Some of them were turned off a lot. If you think about a notebook drive, they were turned off. The capacity was not available to the world most of the time. And when it was, it wasn't full.

It's interesting. So back to my point about supply chain: when we built our supply chain, we built it for velocity, because our customers would need that product very quickly to go to market. And we built it for cost, usually around very few components in the drive, as low cost as we could possibly get to, at enormous volumes. What's happened in the last 5 years, and we talked about this a little bit at our last Analyst Day, and I reviewed that package just to make sure that we're still on the plan, is something that was predictable: the drives that we're making are now fully utilized 24/7 and tend to be full from a capacity perspective.

It's interesting. So we can project zettabyte growth and look at various trends at a macro level, but we forget about this utilization factor that really comes in. I said fully utilized. You'll never get to 100%, because we ship product to the field and then it doesn't get integrated right away. So there will always be some little bit of a lag.

But what we're seeing is a very pronounced transition in the drive types and the way they're used. And the other thing that's going on is that the heads per drive just keep increasing. Now what's ahead, and what does that mean in the context of the disk drive? Think about a notebook drive: low cost, about one platter, two heads, sometimes one head, and the lowest possible cost you could get to so that you could enable that market space. Those spaces are going away, and the volume around those spaces is going away.

But for each one of those, it's not necessarily a one-for-one replacement. A cloud drive has many disks, 8 disks, 9 disks. It has 2 times that many heads. And so it's not exactly a one-to-one unit replacement, but the drive types are changing, and the pressure on our components is changing quite a bit as well. Client server was huge.

Most of the components can be pivoted over. At some point, we reach an equilibrium in that pivot, and then we hit a growth period. And what we're seeing as a forecast is that the number of heads per drive will continue to increase as the data needs of the world increase. And we'll talk about that throughout the course of the day. That gets us out of the client server days, which peaked in about 2010 or 2011 from a unit shipment perspective, which is quite profound.

It gets us into a new era where the cloud starts rising. Why is the cloud rising? I think everyone knows the answer to this. The cloud is driving some really innovative business models. And we have our own story. Ravi will come up and talk about the Seagate story, a little bit about how we were building our own tailor-made applications, when everything was on-prem.

And we said, hey, there's a cheaper way to do it, and we can probably get better economy of scale, and also we can get better feature sets rather than developing them ourselves if we go to the cloud. The public cloud is growing immensely, still continues to grow. As it grows, it does that partly by aggregating data and it has to reinvest in that data aggregation. And so the numbers that you see on the hyperscale data centers on this slide are an estimate. It was done by Cisco about a year ago, I think.

And I think it's actually if you sit back and think about that, it's pretty profound. Now not all data centers are created equal. Some are more storage intensive, some are more compute intensive depending on the application. Some data centers have linear combinations of all of these things inside many different applications. Not all cloud service providers are identical either.

Some of them are very different from each other. But generally speaking, the growth of the data center is what's fueling the growth of the drives inside. And the ability to get drives at better and better value propositions is very important in this. Data center architectures, again, it's easy to say that there's a one size fits all data center. It's not really true.

Data center architectures, generally speaking, from a storage perspective, are breaking into high-performance storage and high-capacity storage. That mass capacity storage is growing very quickly. Mass capacity is not always the highest number of terabytes per drive, but it's actually the highest growth that we have in the market. So let me talk about high performance and high capacity a little bit. There are different architectures.

Most things that touch the end customer quickly are very compute-centric, and they require high-performance architectures. Time is money there. It's also expensive, and the scale-out of that is very expensive. When it becomes data-intensive, and it backstops those performance tiers, you need mass capacity.

Generally speaking, the cloud has migrated into these 2 distinct architectures. We'll talk about this technically, but I think it's very important to understand that these are different regimes. Interesting thing is this scales down to even small installations in public cloud now. It scales down to the edge as well, the same concept of breaking the architecture up. So how's it going in our industry?

Well, if we continue to run with the technology that we have today, with some optimization, and continue to feed the growth as we know it (and there's still a little bit of shrinkage off the client server happening, frankly), we will see a small increase in the number of exabytes. We have new technology, thankfully. We can answer the call with new technology.

And that will get us about another 6% compound annual growth. So think of that as assisted magnetic recording, which we'll talk about today. That's not even enough. The IDC forecast, inside of that 17 zettabytes that the world's going to need online by 2025, implies a 17% CAGR getting there. So we're going to have to invest in those critical components: heads, disks, other components as well, and drive test.

That will have to happen in order to hit the numbers. It's actually interesting. I think if you step way back and look at it, you say, in some sense, the cloud has been able to grow so economically because we had client server coming down. But at some point we reach equilibrium, and then we have to continue those investments. We believe we're at that point right now.
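The growth-gap argument above can be sketched numerically. The 6% and 17% CAGRs are the figures from the talk; the starting exabyte base is a hypothetical placeholder, not a Seagate number:

```python
# Compare shipped-capacity growth under a technology-driven ~6% CAGR vs.
# the ~17% CAGR implied by IDC's 17 ZB forecast. Base is illustrative only.
base_eb = 1_000          # hypothetical starting exabyte base
years = 6                # roughly 2019 through 2025

tech_path = base_eb * 1.06 ** years    # areal-density gains alone
demand_path = base_eb * 1.17 ** years  # what demand requires

gap = demand_path - tech_path
print(f"Tech alone: {tech_path:.0f} EB, demand: {demand_path:.0f} EB, "
      f"gap to close with component investment: {gap:.0f} EB")
```

Whatever base you assume, the 17% path ends up nearly twice the 6% path over six years, which is the gap the component investments are meant to close.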

So we'll talk about that in the model. I think the bigger issue is that everything is going to grow. Almost every tier is going to grow. It's not a trade-off of one versus another. Every tier that exists in the data center is going to grow.

Okay. In IT 3.0, we believe that we've pivoted nicely over the last 5 years. We have the strongest portfolio. We have the highest capacity disk drive. We have the world's fastest disk drive with dual actuator, which we'll talk about today.

We'll talk about why we did dual actuator, why that's so necessary, and why that is a trend that will keep on going in the future. We also have the best feature set. Something not very well appreciated out in the world is that every cloud service provider is different. Some of them have many different applications, and you need to tailor-make features for their applications, go through their qualification cycle, and make sure that you can latch into that application. So it's not just as simple as making one drive that fits all applications, okay? So as I think about 3.0 and 2.0, I'm really happy.

And if that was the end of the story right now, we would probably be talking to you a little bit differently, but it would still be a significant cash flow story. There's something new coming. In the breathing mode of centralization to decentralization of data, the cloud will not be enough to keep all the data in the world close enough, cost-efficiently enough, privately enough for everyone's requirements. As a matter of fact, with the data growth that we'll see in the next 10 years, the rise of the edge, the edge is the thing to watch. It really is.

And let me just talk about a few examples of why that's true. AI, okay? You hear the words a lot: AI, machine learning. And then we put it around compute, edge compute. And the compute has CPUs or GPUs or TPUs or whatever you want in it. If you think about this for a second, if you had that thing in your hands, would you buy it?

Would you buy it? It's interesting, because the answer to that question is about data flow and what kind of data you're going to be processing, more than just a one-time instance. Now there are one-time instances, things like microservices: I have a set of data I need to process, maybe I'll pay somebody temporarily to go do that. But if I'm going to make a large-scale install of any of those things, I need to understand the data lifecycle. And to think that the data will be processed once and then thrown away, I don't think is a very good way to think about it. Maybe for some data; maybe some data will be thrown away before anybody looks at it.

But if you want AI, or if you want machine learning to do some learning, that generally implies that you process the data and then can reference back to it at some point in the future. A second later, 100 seconds later, a year later: what's the model? The better AI is, the more it needs these pit stops for data all over the place. And moving the data around, and doing it efficiently, is a big, big deal in the future. So let's just use autonomous vehicles as an example.

It's one that you guys have probably thought about a lot. There are a lot of arguments about how long it will take. I actually like it as a proxy for a lot of other kinds of vehicles, service vehicles and things like that. But autonomous vehicles generate 10 terabytes to 50 terabytes of data per day. That's a lot of data.

So when we talk about a 16 terabyte drive, I mean, you literally can put that in your hands, right, and hold it. You think about that amount of data, it's a lot of data per day. Well, intuitively, people say, well, a lot of it will just be thrown away or there'll be some decision made and then the rest of the day it'll be thrown away. You have to be careful with that because maybe some of it will, but I'm sure that there will be other people who find value in that data a day later, a week later, a month later in some applications. Enormous amount of data.
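Those per-vehicle figures can be translated into drive counts. This is a rough sketch; the 16 TB capacity is the drive mentioned in the talk:

```python
# Sizing the autonomous-vehicle example: 10-50 TB of data generated per
# vehicle per day vs. a single 16 TB hard drive.
drive_tb = 16
for tb_per_day in (10, 50):
    drives_per_day = tb_per_day / drive_tb
    print(f"{tb_per_day} TB/day is about {drives_per_day:.1f} "
          f"sixteen-terabyte drives per vehicle per day")
```

Even at the low end, a single vehicle fills a meaningful fraction of a flagship drive every day, before any fleet-scale multiplication.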

From Seagate's perspective, if the world had autonomous vehicles today, life would be good, okay? And it's interesting, because a lot of people talk about that future, but they don't talk about the IT infrastructure that would be required to support it. It's true in a lot of different applications that people are talking about today. They dream of the future, and then they'll say, yes, big data and IT and the cloud, and they'll use the words. We actually have to design and build the products.

And so we're thinking about this very quantitatively and seeing a lot of opportunity. The next one is IoT. What is IoT? It's a bunch of sensors out in the world. Doesn't that already exist?

Yes, it does, but it's growing. So the numbers that you saw were 7 billion connected devices going to 42 billion connected devices in the next 5 years, a remarkable shift. And some people would say that's too small. Now what's a connected device? That gets into how much data is actually coming from some of these sensors that are out there.
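The implied growth rate behind those device counts is simple arithmetic on the quoted figures:

```python
# Implied compound annual growth rate for the connected-device figures
# cited: 7 billion devices today growing to 42 billion in five years.
start_devices = 7e9
end_devices = 42e9
years = 5

cagr = (end_devices / start_devices) ** (1 / years) - 1
print(f"Implied connected-device CAGR: {cagr:.0%}")
```

A sixfold increase in five years works out to a compound growth rate of roughly 43% per year, which is why "remarkable shift" is not an overstatement.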

Some sensors don't create a whole lot of data themselves. Other sensors create a ton of data. Cameras and LiDAR units, which are being used, are improving in resolution to make better and better decisions all the time. And as this data is spun off, if you want to pit stop it for a day, a week, a month, then you have to have an economical way to go do that. And I think you're going to find by the end of the presentation that hard drives are the way to do that.

And then 5G. 5G is already here in some instances, but the application space has not really grown very much yet. It will. And 5G will just basically get a lot of data moving around the world. It will enable applications on the creation and consumption side like we've never seen before.

And when that data is moving around, some of it will have to be pit stopped as well. And the amount of data that we're talking about with 5G, up here it says 310 petabytes per day. As these days go by, people will need more and more mass storage devices to contain it. How do we think about the edge? I've already kind of made the case to you that you need the data cost-efficiently, and you're going to make decisions based on that.

Now some people would say cost is not an issue for my application, and maybe that's true. Architecturally, there are people who would say, this is the right solution for me, and it doesn't include a disk drive. Fine, that's okay. But there are a lot of other ones that are very cost sensitive, especially as the data continues to grow. And one of the things Ravi will talk about is that you can make a decision today, but be careful that that decision is actually future-proofed for the data flow that's going to happen over the 5 or 10 years that you own the asset.

Latency is another big issue. There's an assumption that people make about the cloud, that it'll just all go to the cloud and that latency isn't a problem. If we start moving data around, say, with 5G, then the 5G networks will fill up very, very quickly. And power and other issues will become the constraint. It's kind of interesting.

And then at that point, latency becomes an issue. So you really do need proximity to your data if you're going to process a lot of AI and machine learning data. And then finally, we can all see this in the world, the ability to actually protect the data. I say this all the time, where's the data? And let's not talk about this in an abstract way.

I want to physically point to the data. I think there are a lot of moves going on in the world where data will have to be localized for many different reasons, and I think you're aware of that. So what we've done is we've gone out and talked to a bunch of customers. End customers, and these are sometimes customers of our customers. The world has gotten used to a model where people that make individual componentry, especially hard drive makers, don't get to talk to the end customers.

That changed during IT 3.0 as well. We've talked to various kinds of customers. We've done ride alongs. We've gone into hospitals. We've understood media and entertainment data flows where protection is very important and data is huge and various other customers.

And what we found is that the requirements that they have today are dramatically different than probably where they're going to go in 5 or 10 years. And they know Seagate is going to be around, so they need our help. It's actually an exciting time. So we'll talk about that throughout the course of the presentation. What does all that mean?

What does all that mean? Mass storage is going to be the quintessential driver of the HDD TAM. I think Gianluca will give you some quantification in his slides later. But suffice it to say that we'll get to a point by 2025 where 90% of HDD storage is mass capacity storage. We do expect some of the remnants of 2.0 to still be around, but 3.0 itself will probably drive $24 billion worth of opportunity.

There's still some 2.0 hanging around. And then 4.0, we believe, is another $3 billion to $6 billion worth of opportunity that exists in front of the company today. And we're positioning ourselves to be able to go after that. The interesting thing about that is that I think the learnings we had through the transition from 2.0 to 3.0 actually apply pretty well. The reduction in product platforms, which I'll talk about in a second.

The understanding of the customer models. The data localization and speed requirements, the performance requirements, are actually interesting. So we're well positioned for the entire portfolio, I think. 2.0, as long as the markets continue, we'll continue to ship to it. We believe we have the best portfolio there already.

And I know people have made mistakes in the past trying to read through us, saying that's a bad product, this is a good product. All of the products we have, we like. We'll continue to build them as long as our customers need them. We are here to serve their desires and their needs into their end markets. We don't really influence the end markets that way.

The 3.0, we position the company very well, both with the relationships, the key technology features that we need, and then having the best TCO proposition as well. And the 3.0 leverages into the 4.0, and it allows us the opportunity to go have those great conversations with those new customers.

From a platform perspective, the way to look at this is: we believe that for that huge swath of performance requirements, all the way from very, very cold storage to relatively high-performance storage, Seagate has the best cost-per-terabyte proposition in the market. And we also build systems where we can aggregate those drives into the systems. So we know and understand the feature set there. We can address the best cost per petabyte as well with some of those features.

And with even further integration, which we're doing today, and I think most of you know this, we'll actually integrate the shelf too. And we're working on those features that make us better and better and better, so we can drive the best cost per exabyte. I think this is why we're so excited. We believe that some of those exabytes will reside in the cloud, but some of them will grow at the edge, whether it's in private data centers, very small scale data centers, telco points of presence, or other places that we would call the edge today. There are instances of the edge already.

Edge storage today is arguably more than 15% of the exabytes we ship. And you see this building in the data that we have, for example, from the distribution channel. You see how the distribution channel has shifted from white box PCs a few years ago to new products and a lot of industrials today, very interesting. Platform strategy is very important, and the pivot from 2.0 has been profound. We've reduced the number of product platforms that we have dramatically in the last few years.

This was our plan in FY 2016, but it's probably been accelerated by the dynamics of the market. We've concentrated our R and D locations. Some of that is not just because notebook drives changed over to cloud drives, but it's also because the preponderance of customer issues have changed as well. You think about Seagate, we don't just make the drive and then ship it to the customer and they try to integrate it. In many cases because of our history, they've told us, hey, please understand how we have to integrate into the system.

And so we do the testing for many of our customers. We do the qualification testing. We understand their field very well. We understand some of the issues that could happen if the drive doesn't behave well in the field. So it's not all about forward looking architectures.

And so when you start to narrow down the product set, you don't have to do as much qualification, systems integration, and all these other things that have traditionally cost you a lot of OpEx. Also underappreciated, from my perspective, is that the world is changing: the BIOS changes, the operating system changes, the chipset changes, the host bus adapter changes, the application space changes. We have to understand how to play against all of that. So we're a critical piece of the technology ecosystem for those customers. And all of that complexity, not because of us, can inhibit the transitions.

We feel like we've done a great job of being able to pivot our resources through that. So we've consolidated the R and D sites. We've also consolidated the manufacturing sites. Remember, the growth will not necessarily be in drive count. It will be in heads and media, which is inside the drive.

We make all those ourselves. And so we'll have to make sure that we balance the types of factories that are required for that production with our existing footprint. But we've actually done that very well over the last few years. So we're ready to scale to address this future demand. We have the technology.

We are the areal density leader. And we can talk about that; John will go into the details there. That in and of itself allows for total cost of ownership benefits. We've been doing that for 40 years.

And we're still quite proud of that, and we actually think we're on the cusp of a new transition that allows us to do it even more efficiently than we have for the last 10 years, which is exciting. We're also the performance leader. We were when we had 10K and 15K drives. We invented the 7,200, 10K and 15K RPM drives. We know the interfaces.

We know how to work with the customers. We're the performance leader. So we'll talk a little bit about our dual actuator technology in that vein. And you can ask questions about that as well. The feature set that goes behind the cloud, the IT 3.0 is very important.

And in some sense, those features will be aggregated to be able to address 4.0 issues as well. So monitoring features, health monitoring features, security features, and all kinds of other performance features will be leverageable in the future, and we think they will leverage quite well. And because of all that leverage, just talking about the history that we've just been through, we think we have a very strong financial model. From my perspective, there's been cyclicality at the bottom of the transition between 2.0 and 3.0. Some of that cyclicality is the cloud.

Some of that cyclicality is other macro things that are going on. It's not seasonality. I think that's important because if you go back 10 or 15 years in our industry, we talked about PCs, and there was seasonality there. There is still some seasonality in the 4.0 markets, but it's more of what I would call cyclicality right now. And so if you look over the last few years, our operating income has grown, our free cash flow has grown, and our EPS has grown.

Generally speaking, a few years ago, our revenue was higher because we still had a significant component of the 2.0. Our products have pivoted though into the 3.0 and 4.0. And we've been able to do that very efficiently. This FY 2019, as we've talked about publicly, was a relatively tougher year because of some of these cyclicality issues. I think people understand that.

And when that happens, you make sure you can run your business for free cash flow very efficiently. I think we did a good job of that in FY 2019. Looking at these long term models then, I've been CEO for a couple of years and people ask, so how do you feel about the free cash flow? Are you confident in the free cash flow? Yes, I'm very confident in the free cash flow.

I'm confident enough to make the investments that we need to make in ourselves, in OpEx and CapEx. I think we have a great plan to address that in the future. And I think we can still commit to the shareholder, like we did before. Actually, last October, we said 50% of our free cash flow will go back to the shareholder. But in the spirit of that, what I'd like to talk about right now is a 3% increase in our dividend coming in January.

We're raising the quarterly dividend. That'll take us to $0.65 a quarter, $2.60 annually. And philosophically, how I feel about that is we should do a periodic review of the dividend and make sure that we reaffirm this all the time. So as CEO, I'm cognizant of the fact that we hadn't moved our dividend before. I'd like to announce that today; I'm very proud of the faith that I have in the team and the cash flow that we have in the company, and I want to make sure that we continue to stay focused on the right shareholder return as well.

So we're targeting that increase periodically over time. I know Shanye did this a little bit already, but just a quick walk through of what you'll see today. You've already heard enough from me. Ravi is going to stand up, and he's an interesting character. He's a CIO who actually has to manage a lot of data.

There's about 30 terabytes a day being created in the Seagate factories. In order to drive the efficiency that we want, you can only imagine, with thousands and thousands of engineers, there's a lot of people saying, hey, if I only had this data. By the way, some of those decisions cannot be made with the data right there, because the cycle times are quite long. In a wafer process, for example, it could be months. And so the data has to be cached for quite a long time and coordinated properly.
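As a rough illustration of that caching requirement, here is a minimal sketch using the 30 terabytes a day figure from above. The 90-day retention window is a hypothetical assumption for illustration only; the actual wafer cycle times and retention policies are not specified here.

```python
# Rough sizing of how much factory data must stay cached while long
# manufacturing cycles complete. 30 TB/day is the figure from the talk;
# the 90-day window is a hypothetical assumption for illustration only.

DAILY_GENERATION_TB = 30   # terabytes created per day across the factories
RETENTION_DAYS = 90        # assumed window, since wafer cycles "could be months"

def retained_capacity_pb(tb_per_day: float, days: int) -> float:
    """Petabytes needed to hold `days` worth of daily output (decimal units)."""
    return tb_per_day * days / 1000.0

print(f"~{retained_capacity_pb(DAILY_GENERATION_TB, RETENTION_DAYS):.1f} PB retained")
```

Under these assumptions, holding a quarter's worth of factory output alone is a multi-petabyte problem, before any coordination or analytics on top of it.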

So he has to attack all these challenges, and he gets a lot of pressure from our operations units, as you can well imagine. He has an interesting perspective on that. He has an interesting perspective on running a global business today. We listen to him. He's a key strategic part of the team, because he's not just a CIO of a company whose product has nothing to do with data.

He's a CIO that can actually inform us quite a bit. I'm pretty excited about having him come up and speak. And then John will stand up and speak, giving you an appreciation for our core technology. But more importantly, the takeaway from John, I think, is he's a voice who has had access to cloud customers for the last 10 years. Deep relationships, and not just with 1 or 2, but with many, geographically dispersed.

He understands their different application spaces probably better than anyone because he gets to actually talk to them about the future products. And so it's great to hear from John. And then Gianluca will come up afterwards and reinforce why we feel so strongly about the model, okay? Before I hand the ball over to Ravi, though, thank you all for coming again. But we want to show a video.

The video is about our customers. And remember, I said Ravi is kind of like customer 1 for us. If we can't satisfy our own CIO in making sure that we're doing the right thing from a security perspective or a cost perspective or a performance perspective, he'll let us know it. We have to listen to our customers going forward; their world is changing very quickly along the vectors that I talked about in IT 2, 3 and 4. And we've built great partnerships over the years, and that's fundamentally the platform that I'm proud of.

So let's cue that video and then we'll hand it over to Ravi. Thanks.

Speaker 7

I think about what's happening today. We have entered a data era. We like to call it a data tsunami. We think the creation of data is unprecedented over the next several years. At Tencent, we are dealing with a lot of data.

Speaker 8

And the customer data is more important than anything to us.

Speaker 9

We've truly become a technology company that uses data to make our business better. The challenge is making sure that, that data is properly governed, managed, it's accessible to our users. And of course, that we store all that data in a cost effective way.

Speaker 7

And our customers are really searching for an easier answer: how do they move workloads and data from the edge to the core data centers or private cloud, out to the public cloud?

Speaker 8

On Tencent Cloud Infrastructure, we use a lot of hard drives to power our cloud block storage, our cloud object storage and the big data platforms. In the cloud industry, actually, we really prefer to use a high capacity drive because we can save on the rack space and we can have better power efficiency and that can lead to a much better TCO.

Speaker 9

We've been crunching a significant amount of data. By using advanced analytics and prescriptive analytics in the way that we manage that data, we have been able to save in excess of $300,000,000 to $400,000,000 a year.

Speaker 7

We cannot service the needs of our customers with all of this data coming without continued innovation in disk drives and storage technology.

Speaker 9

So I count on partners like Seagate to continue to bring innovative solutions to the table in which we can find ways to store, manage our data more effectively and continue to do it at an appropriate cost.

Speaker 8

Seagate has always been a trusted partner to us. We are not only working on the hard drives; we also design systems together that power Tencent's infrastructure for internal services and also the cloud services. And for many years to come, we can see this trend cannot be changed. Hard drives are going

Speaker 7

to stay there for a long, long time. We're the leader in infrastructure technology in the world, and we count on Seagate's innovation to help us and to fuel that growth in the future, and I look forward to the years to come.

Speaker 1

Good morning, everyone. In my 15 plus years of being CIO at various companies, I have been a customer of Seagate products. I know the changing needs of a technology customer because I am one. The primary goal of IT in any organization is to help drive efficiency gains in labor, time and costs. As a CIO, I have to balance the desire of the organization to buy the next cool tool against the value that it generates.

Every CIO that you talk to will tell you that while there is great promise of gains with new technology, we have learned over the years to separate hype from reality. A well run IT organization is tuned to drive cost efficiencies at all times. If we look at the priorities of CIOs from a decade ago, they were cost, support, scalability and interoperability. If you look at the priorities today, you will notice that over the years, there is one element that has remained constant: the focus on cost. Cost efficiencies of IT services and the resulting productivity gains are front and center for CIOs.

The change of other priorities is not surprising. People have moved to an experience based world. CIOs have jettisoned the grunt work of managing data centers, allowing them to focus on business growth, measurable outcomes and privacy requirements. Seagate's storage solutions are aligned with these changing priorities of CIOs. In fact, my team and I work very closely with our product development teams to ensure there is ongoing customer feedback with regards to features and requirements of our products.

The promise of the public cloud was about addressing these changing priorities of CIOs by reinventing and simplifying architectures, allowing easier adoption, lowering the total cost of ownership by leveraging economies of scale, and investing in security on a scale no CIO could possibly think of. Now while the promise of the public cloud is still strong and continues to grow, those organizations that were early movers to the cloud have reached a stage of cloud maturity where it is clear that the public cloud is not the answer for all workloads. Don't get me wrong, the public cloud is here to stay and will continue to grow. We will see strong adoption in the foreseeable future. But CIOs are now able to determine the most optimal location for any workload based on a slew of parameters like TCO models, cost of cloud lock in, latency challenges for certain workloads and newer regulations around data privacy.

This is driving workload migration. Some workloads are repatriating from the public cloud back to on prem and private cloud data centers. Other workloads are moving closer to the consumer at the edge, giving rise to edge and micro edge cloud data centers. I actually have an example that I'll talk about in a few minutes on this. The drivers for edge migration are bandwidth costs, latency, and a need for real time analytics in mission critical environments like hospitals and factories.

The drivers for repatriation are costs, data privacy and compliance with data localization regulations. As data creation continues to explode every single day, and as CIOs choose not to delete data anymore, storage capacity growth will struggle to keep pace with the demands. We will see storage needs grow in all three segments: public, private, and edge ecosystems. More importantly, as customers like myself try and balance the workload distribution between these three segments, there will be significant attention given to the underlying storage media and the associated costs. In my experience, 80% to 90% of application workload requirements are met by today's HDD performance, with 10% to 20% of specialized workloads requiring higher performance storage media.

This, along with the TCO advantages of HDD, will play a bigger role in decision making as storage capacity requirements skyrocket. Looking closely now at which workloads will drive the growth of each segment, you will notice large volume data intensive workloads like AI, ML, backup and disaster recovery will repatriate and grow in the on prem and private cloud data centers. This will be driven purely by TCO. The growing costs that CIOs are dealing with in the public cloud come from these large volume data workloads.

In the public cloud, we incur storage costs every second of every day as data once stored usually stays undeleted. Further, the amount of data only grows, and hence these costs show a rapid growth profile. This is the primary driver for big data workload repatriation to on prem and private cloud data centers. Let me tell you about my recent experience. A couple of years ago we reduced our cost by 40% by migrating our big data Hadoop cluster from our on prem data center to the public cloud.

This cost saving was a result of adopting a cloud native architecture and driving discipline that the cloud demands. As we looked at this optimized architecture of the cloud, we realized that over the next 5 years, our costs are going to significantly increase with the projected data growth. We generate 30 terabytes of data every day in our factories. A large part of this data is thrown away because it just costs too much to store it in the cloud. There is value in this data and we would like to store it for a longer period of time so that we can analyze it to help improve our operations.
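The cost dynamic described here, where every stored byte bills every month it exists and the stored total keeps growing, can be sketched as follows. All rates and volumes below are hypothetical placeholders, not Seagate's or any cloud provider's actual pricing; the point is only the shape of the curve.

```python
# Illustrative model of cumulative public cloud storage spend: every TB
# stored keeps billing each month it exists, so as the stored total grows
# linearly, cumulative cost grows roughly quadratically. The rates below
# are hypothetical, chosen only to show the shape of the growth.

def cumulative_storage_cost(tb_added_per_month: float,
                            months: int,
                            usd_per_tb_month: float) -> float:
    """Total spend when each stored TB bills every month it exists."""
    total = 0.0
    stored = 0.0
    for _ in range(months):
        stored += tb_added_per_month     # data "usually stays undeleted"
        total += stored * usd_per_tb_month
    return total

# Doubling the horizon roughly quadruples the bill rather than doubling it:
year_1 = cumulative_storage_cost(900, 12, 20)   # ~30 TB/day of new data
year_2 = cumulative_storage_cost(900, 24, 20)
```

This compounding profile is why large, rarely deleted datasets are the first candidates for repatriation to on-prem or private cloud infrastructure, where cost is dominated by amortized hardware rather than per-month billing on an ever-growing total.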

We modeled running this new re architected environment in a private cloud setting and realized that repatriating this workload from the public cloud back to our on prem private cloud setting will give us a further 5x cost savings. Here is what is becoming clear. Storage requirements of each ecosystem are different. The edge requires resilient solutions. Servicing at the edge is time and cost intensive.

The public cloud looks at dollars per gigabyte, and hence the highest capacity HDDs are foundational to the public cloud's business model. And lastly, the on prem and private cloud data centers, where large volumes of data will be stored, require high density storage offerings. As a customer, I look at Seagate, with our strong portfolio of HDDs and systems, as addressing the needs of CIOs in the public cloud, in the private cloud and in the edge data centers. Seamless and secure data integration and data movement across this hybrid ecosystem will become an important decision point for CIOs as they look to address the changing priorities. Solutions that provide secure data storage, cost and management efficiencies, along with user experience and business outcomes, will lead the data storage market. So in summary, storage needs continue to grow.

TCO plays a key role in storage decisions, and mass capacity storage will best address the exponential demand growth that IT organizations are gearing up for. Thank you.

Speaker 2

We're going to take a 20 minute break for those of you that are in the audience, and if you haven't had a chance to already, I would encourage you to check out our product showcase in the adjacent room. And for everyone else on the webcast, we'll be back in about 20 minutes, which I think is 10:25 Eastern Time.

Speaker 1

Ladies and gentlemen, please take your seats. Our program is about to resume. Please welcome Seagate Chief Technology Officer, John Morris.

Speaker 6

Welcome back, everybody. Happy to be here. Just wanted to spend some time now talking about scale and efficiency for storage, and we'll get into a lot of details about what that's all about. I've been at Seagate for about 23 years now, and I've spent time with our R and D teams, product development, manufacturing and product management over those years.

And in particular, in the last 10 years, as Dave mentioned earlier, I've spent considerable time with our enterprise and cloud customers, really trying to immerse myself in the problems they face and struggle with as they attempt to scale data and storage in their infrastructure. And the purpose for that is to ensure that as a company, we are able to fully unleash our R and D and manufacturing capability to solve those real industry problems associated with scale and efficiency. So with that in mind, I'm going to lead with a quote from Google, one of the world's leading cloud companies. At a conference they presented at last year, they said, HDDs continue to be competitive and critical for the cloud bulk storage tier for the visible horizon. And there's a lot of reasons for that.

And the criticality of hard disks to solve their problems goes beyond storage cost per bit type efficiency, because in a cloud environment, drives also provide a lot of the performance capability that exists in the cloud infrastructure. So I'm going to walk through a framework to talk through just why this is the case. But this is an important theme here, and it's really the central thesis for Seagate: hard disks are the best platform for scale and efficiency in bulk storage, and we have a plan and a path to deliver on that thesis over the next decade. And that's what I'll be covering today. So here's the framework that we're going to use throughout the presentation today.

And it's really simple and it focuses on a few key attributes, but it turns out simple things are often powerful, in that they help gel ideas about why something is important and has value, and help us focus on what's truly relevant in the market today. So those simple things I'm going to talk about today are on the left side of the screen here. What do our customers care about from storage? Well, first and foremost, you saw all of the earlier presentations covering this massive explosion of data creation in the data sphere, and how companies are going to meet the storage demands associated with all that data creation. So, effective and efficient scale of their capacity and storage growth.

Second, they need to actually use those bytes. The bytes don't sit there forever in a state of disuse. Applications cycle through that data, which means they need efficient and usable access to that data over the life of that data, achieving the service level agreements that they need to deliver on with their users and their applications. So that's making the capacity usable over the full life cycle of that data. And the last item to talk about from the perspective of a data center is their total cost of ownership.

And I'll present some metrics that are used for that today, but how it all rolls up from their perspective is how they make decisions when they deploy large scale infrastructure. In our case, we focus in a number of areas: areal density to achieve capacity growth as well as cost efficiency in capacity. And what's changing for us is how we address performance. In the past, for example, in the mission critical portfolio, we used to scale performance by doing things like shrinking the diameter of a disk and spinning it faster. That allowed us to seek faster, and it lowered the latency to get to the data. The issue with that architecture is that we shrunk the size of the disk.

We actually increased the cost of storage and that doesn't work anymore. We have to come up with a new framework, a new design space to solve scalable performance in the era of high capacity demand. And we do that by reintroducing parallelism into disk drives. In our case, in our first platform, we'll be introducing a 2 actuator disk drive, essentially doubling performance for all relevant metrics in a drive without sacrificing capacity at all. It's the most efficient way to scale performance and capacity and do it in a power efficient way.
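One way to see why parallelism matters here: a drive's random-IOPS budget is roughly fixed by seek and rotation physics, so IOPS per terabyte falls as capacity grows. The sketch below assumes a ballpark of around 80 random IOPS per actuator, an illustrative figure rather than a product specification.

```python
# Why dual actuators matter: random IOPS per drive is roughly constant,
# so IOPS per terabyte erodes as capacity grows. A second independent
# actuator roughly doubles the IOPS budget at the same capacity.
# 80 IOPS per actuator is an assumed ballpark, not a product spec.

RANDOM_IOPS_PER_ACTUATOR = 80

def iops_per_tb(capacity_tb: float, actuators: int = 1) -> float:
    """Random IOPS available per terabyte stored."""
    return actuators * RANDOM_IOPS_PER_ACTUATOR / capacity_tb

# Under these assumptions, a 16 TB single-actuator drive offers 5 IOPS/TB,
# while a dual-actuator version offers 10 IOPS/TB at the same capacity.
```

The design point is that, unlike shrinking and spinning disks faster, adding an actuator scales performance without giving up any capacity or cost-per-terabyte.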

And on the last item there, when I focus on total cost of ownership from a storage supplier perspective, the most important element is dollars per terabyte. We'll see that there are a couple of other metrics that matter as well, but the principal metric is dollars per terabyte. So let's look at the history here in areal density. What you see here is the last 20 to 30 years of areal density achievements for our industry. All the points represent various products that have been released over time. And I'm highlighting here the period we're in right now, which is the period where we had longitudinal and perpendicular recording.

Well, the demand for storage is growing. And historically, through areal density growth, we take full advantage of the underlying recording physics, and we're able to grow areal density over time. But periodically, we are unable to evolve areal density with a particular recording technology, and we have to innovate and deploy something new. And if you look, there have been several periods in the last 30 years where the industry has done just that. Right around 1998, we as an industry introduced magnetoresistive readers, which injected new life into longitudinal recording.

And you can see it occurred. There was an inflection point; growth with longitudinal media and the prior reader technology had kind of slowed down, but the introduction of this new technology generated new life for longitudinal recording, and it spurred about a decade of pretty intense growth for the industry. Then we go into another period with perpendicular. You can see that around 2005, the industry made a transition from longitudinal to perpendicular recording, right around when we started losing the capability to grow and scale with longitudinal, and it created another period.

In this case, it's been about 15 years of growth, about a 10x increase in areal density over the last 15 years. But you can see, here in 2019, we're approaching a relatively low growth period for perpendicular recording, and we need to do something new. That's where Seagate, in pioneering heat assisted magnetic recording, has positioned a new recording technology that's going to allow us to continue to grow. The dotted lines on the right show the trajectory that perpendicular would be on, the line on the bottom, and the trajectory that we are on with HAMR. And I'll get into some of those details in the next few slides.

To make HAMR a reality, we have to address some fundamentals in the drive, which are the media and the heads: what do we need to do to take advantage of it in our media, what do we need to do to take advantage of it with our heads, and then how do we achieve volume scale with the technology so that we can serve all those exabytes that are out there in the world? So here you'll see a video; the next two slides have 2 different videos, just to highlight some of the structures that are deep inside a disk drive that nobody really ever gets to take a look at. But what you can see here on the bottom of the video, there's a single track of media. Above it is a writer, a perpendicular writer in this case, and it's writing bits. They're showing up as blue or red marks on the media.

And if you look at those blue and red marks, there's a whole bunch of irregular shapes. Those irregular shapes are individual grains, magnetic grains in the media. And when we write them, we basically set the magnetic state to the state that we want, and it holds the bit of data that you're trying to store on the drive. This is an artistic representation here, just to highlight. One of the challenges as you push areal density is you have to continue to shrink those grains smaller and smaller in order to achieve the smaller bit sizes, and you reach a point of trade off where making the bits too small results in them having some magnetic instability, or noise, associated with the difficulty of preserving the strength of the magnetic information in the media as the grains get smaller and smaller.

And this is the fundamental design constraint that we're facing now with scaling perpendicular recording. What you'll see on the top right is a small section of small grain size media. It's 5.6 nanometer iron platinum HAMR media, and I've put 2 black rectangles there. One rectangle would be the bit size on that media for a 16 terabyte drive, and below that is the bit size for a 32 terabyte drive. And we would like to see about 10 grains per bit to have the kind of noise characteristics that we want in the media.

And you'll see, if you counted them out, there are about 10 grains in that smaller one. So this media would be a candidate media for supporting a 32 terabyte drive. And I'll get into a little bit more on the next slide about some of the other ingredients for being able to do that. But the point is, we have a media technology that lets us shrink the grains to the sizes that we need to support much higher capacity drives. One of the key elements of that is the introduction of iron platinum as the principal alloy for the media, which has the characteristic of much higher coercivity.
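That "about 10 grains per bit" count can be sanity-checked with quick arithmetic. The bit dimensions used below (about 30 nanometers cross track by 10 nanometers down track for the roughly 32 terabyte design point) are the ones quoted later in this talk; treating each grain as a square on a 5.6 nanometer pitch is a simplifying assumption.

```python
# Quick check of the ~10 grains per bit figure for 5.6 nm iron platinum
# HAMR media. Bit dimensions (~30 nm x ~10 nm) are the ones quoted later
# in the talk for the ~32 TB design point; modeling each grain as a
# square on a 5.6 nm pitch is a simplifying assumption.

GRAIN_PITCH_NM = 5.6

def grains_per_bit(cross_track_nm: float, down_track_nm: float,
                   pitch_nm: float = GRAIN_PITCH_NM) -> float:
    """Approximate grain count inside one bit cell."""
    return (cross_track_nm * down_track_nm) / (pitch_nm ** 2)

# grains_per_bit(30, 10) comes out just under 10, consistent with the
# noise target described above.
```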

That translates into the ability to store the bits in a stable way at these small grain sizes. The challenge with iron platinum is that, because it is so stable magnetically, it's challenging to write. And that gets us into the next big ingredient, which is our head technology. So here's another similar video. It's going to show a transition from a perpendicular writer.

What was just shown up there was the introduction of a near field transducer that we use to assist the write. What happens here is we shine a laser on this near field transducer. It creates an electric field resonance on that near field transducer that couples into the media, and there's an energy exchange that results from that coupling. That energy exchange allows us to take advantage of a property of magnetic alloys: if you heat them up above their Curie temperature, their coercivity drops, which lets us write the media with a normal perpendicular writer, the same structure we saw before. Then it cools, and as the media cools, the coercivity goes back up to this very high, very stable state.

And therefore, we now have a robust, stable transfer of data onto the media. And that whole process is super fast. It takes about 2 nanoseconds to heat and cool. It's a very fast thermal process. But the key point is, the introduction of these two design changes, the media with iron platinum and the head with the near field transducer, lets us break out of the design space that we're in today with perpendicular, and it gives us another opportunity to scale our areal density over the next 15 years.

And this is the central ingredient of our forward looking road map: to capitalize on the innovation in the media and the head to let us continue to grow areal density in a very robust and healthy way over the next 15 years, to serve all the demand for capacity that's out there. Now, a couple of other things I'd like to highlight. If we take that 32 terabyte example that I showed on the previous slide, it'll be a little bit over 2 terabits per square inch, which is about 2x where shipping products are today. In the cross track direction, the bits will be about 30 nanometers, and about 10 nanometers in the down track direction. And it's kind of fun occasionally to think about, well, okay, let's imagine, what does that mean?

When the bits are that size, can I relate that to something else? Well, as we write these tracks that are about 30 nanometers in the cross track direction, we actually have to position our writer at about plus or minus 10% of that. So, plus or minus 3 nanometers, which is a hard size to visualize. But think about this: the diameter of a carbon atom is about 1.4 angstroms. So we're talking about a window of 40 to 45 carbon atoms' worth of positioning requirement to read and write.
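Both figures in this passage, a little over 2 terabits per square inch and a positioning window of roughly 40 to 45 carbon atoms, follow from the stated bit dimensions. A quick back-of-envelope check, using the quoted 1.4 angstroms per carbon atom:

```python
# Checking the numbers above: ~30 nm x ~10 nm bits imply a little over
# 2 terabits per square inch, and a +/-10% servo tolerance on a 30 nm
# track is a +/-3 nm (6 nm total) window, about 43 carbon-atom diameters
# at the quoted 1.4 angstrom (0.14 nm) per atom.

NM_PER_INCH = 25.4e6

def areal_density_tbpsi(cross_track_nm: float, down_track_nm: float) -> float:
    """Terabits per square inch for the given bit cell dimensions."""
    return NM_PER_INCH ** 2 / (cross_track_nm * down_track_nm) / 1e12

def positioning_window_atoms(track_nm: float, tolerance: float = 0.10,
                             atom_nm: float = 0.14) -> float:
    """Full +/- window width expressed in carbon-atom diameters."""
    return 2 * track_nm * tolerance / atom_nm
```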

I mean, this is kind of going on quietly behind the scenes when you read and write data in your drive, but it's pretty exciting to think about. There's this incredibly precise mechanical system working behind the scenes to make sure all of our information gets in and out of the drive in a safe and reliable way. So now we see our 20 terabyte drive, and I just want to give you a little bit of background for it. In the room later on today, you can see it and talk to us about it a little bit more, but we've been working on HAMR a long time, nearly 20 years, staging all of the R and D associated with the media, the heads, waveguides, all the sorts of things that have to work. And we're a little over a year away from having drives ready for qualification.

Up to this point in time, we've built millions of HAMR heads over the years. And this is all in order to fully exercise our supply chain, to ensure that we have the ability to scale volume with all of this new technology in a way that's going to be seamless and easy for our customers to adopt. We have drives operational in our labs, but more importantly, we've delivered drives to our partners, and they've begun evaluation of these drives, and we've delivered systems with fully operational drives as well. So we're well on our way to achieving our objective of producing this 20 terabyte first generation HAMR platform in a little over a year.

And I am really excited to talk about the confidence that our customer base, our partners, have in our technology as well. Back in June, Cray announced that they were selected by the U. S. Department of Energy to build Frontier, a new supercomputer, which will debut in calendar 2021 and is expected to be one of the world's most powerful supercomputers. It'll have over 1.5 exaflops of compute power.

It will have the ability to move data in and out of its architecture at about 10 terabytes per second of aggregate throughput, and it'll have over 1 exabyte of raw capacity fueling that 1.5 exaflops of compute power. Well, it turns out that Cray surveyed all the technology for that 1 exabyte back end that it needs, and they've partnered with Seagate to supply it with our HAMR hard drive technology because it offers the highest capacity, the most efficiency in terms of terabytes per slot in their storage system, and the lowest watts per terabyte for that storage back end. HAMR will comprise just under 90% of the total 1 exabyte of capacity for the system, and that roughly 90% will be realized with over 47,000 20 terabyte HAMR drives and over 400 Exos JBOD systems that they'll be integrating to supply all of this back end storage. Now in Cray's announcement, they said that this is going to help them lead the way into the exascale era. Their quote was, it's an era where we're not only transforming how we compute, we're transforming what we can solve, discover and achieve.
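
As a rough check on those figures, the drive count follows from the capacity numbers quoted above (decimal units assumed):

```python
# Back-of-the-envelope check on the Frontier storage numbers above.
EXABYTE_IN_TB = 1_000_000     # 1 EB in TB, decimal units assumed
hamr_share = 0.90             # "just under 90%" of the 1 EB back end
drive_capacity_tb = 20
drives = hamr_share * EXABYTE_IN_TB / drive_capacity_tb
per_jbod = drives / 400       # "over 400 Exos JBOD systems"
print(f"~{drives:,.0f} drives, ~{per_jbod:.0f} drives per enclosure")
```

This lands in the same ballpark as the quoted "over 47,000" drives; the exact count depends on overprovisioning and formatted versus raw capacity.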

And this is a central theme: the explosion of data, the datasphere and the applications around it mean that all of that data, and efficient access to that data, is helping people solve new problems that will dramatically affect how we live over the next 10 to 20 years. We at Seagate are very proud to participate in this launch and to be able to supply all of that storage with our new HAMR technology. So what's this going to look like? We've shown our perpendicular trajectory, and while there's still growth, it has slowed down. We have launched our 16 terabyte platform, and we're in volume production of that platform.

In the next year or so, we'll see 18 terabyte conventional perpendicular drives in the market, and we'll also see other variants like shingled versions of those drives at higher capacities like 20 terabytes. That'll play out next year. But the big transition is coming. We will be launching our 20 terabyte HAMR platform, our first generation platform, at the end of calendar 2020, and it's going to usher in an era of growth that is larger than what we've seen for the last 10 years in perpendicular. It will manifest over the next 5 to 7 years, with a 30 plus terabyte drive in calendar 2023 and a 50 plus terabyte drive in calendar 2026.

And it's interesting to note that we called this back in 2013. Dave had mentioned at a 2013 forum that Seagate would be launching a 20 terabyte drive by 2020, and we are poised to deliver on that prediction in the next year. And as I highlighted with the 30 and the 50 terabyte drives, we'll continue to grow capacity with this technology at above a 20% CAGR over the next decade. So now that there are platforms with high capacity and growth in our future, we need to continue thinking about the performance aspects of that capacity. In particular, we need to scale performance with capacity.
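
The roadmap math is simple compound growth from the first generation platform; a minimal sketch, assuming the 20% CAGR and 2020 starting point quoted above:

```python
# Capacity trajectory implied by a ~20% CAGR from a 20 TB drive in 2020.
base_tb, base_year, cagr = 20, 2020, 0.20
for year in (2023, 2026):
    tb = base_tb * (1 + cagr) ** (year - base_year)
    print(f"{year}: ~{tb:.0f} TB")
```

This comes out to roughly 35 terabytes in 2023 and 60 terabytes in 2026, consistent with the "30 plus" and "50 plus" targets above.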

We need to provide optionality in our portfolio to ensure that that capacity is actually usable capacity for the applications it's being integrated for. While sequential performance has scaled as we've increased drive capacity, there's really been little to no improvement in random performance. And large data centers continue to depend on a fairly significant portion of their workload being random IO requests. So over the next couple of slides we'll look at how to address scaling the random performance that these data centers need. First, it's convenient to use a relatively simplistic framework; it makes it a little bit easier to understand what actually happens behind the scenes in a data center.

So data centers tend to think about service levels in terms of latency, and there are 3 broad tiers of latency that are often taken into consideration for storage. First, I'll talk about latencies that are significantly greater than about 100 milliseconds. Generally, you would associate these types of latencies with an offline or archival element of the data center. And typically, the storage technologies that get applied to this service level take advantage of disk drives, tape drives and, in some cases, optical technologies. Then we move into a middle band of latency expectations, in the 1 to 100 millisecond regime.

And this is really the sweet spot for disk drives, as well as the sweet spot for storage in a data center. Generally speaking, we would say as much as 90%, in some cases more, of a data center's storage sits with this type of service level requirement. And it is principally served with disk drives, today and in the future. The next service tier that tends to get talked about is sub-1 millisecond latency requirements. This is typically very random in nature, and it's where flash technology, SSDs as well as DRAM, solves the problem for a data center.

So what you have here is an architecture where these service levels, the ratios of IO access and the different sizes of storage required in each of these tiers result in storage architectures being tiered. And those tiers are going to have DRAM, flash, hard disk and tape, typically, to solve the aggregate problem. At the bottom is a rough qualitative graph showing about how many exabytes live in each tier. And as I mentioned before, that middle section is, let's say, 80% to 90% of the total storage required for the data center. And there's a reason for that, based on the total cost of ownership of that storage.
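
The three service tiers described above can be caricatured as a lookup from required latency to the media that typically serves it. The thresholds are the ones quoted in the talk; the function itself is purely illustrative:

```python
# Toy mapping from a required service latency to the typical media tier.
def storage_tier(latency_ms: float) -> str:
    if latency_ms < 1:
        return "flash / DRAM"            # sub-millisecond, random-heavy
    if latency_ms <= 100:
        return "disk"                    # 1-100 ms sweet spot, ~80-90% of exabytes
    return "disk / tape / optical"       # offline or archival, >100 ms

print(storage_tier(0.5))     # flash / DRAM
print(storage_tier(20))      # disk
print(storage_tier(5000))    # disk / tape / optical
```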

So now, if we talk about how we're going to address scaling performance without sacrificing the scale of capacity, it's convenient to look at a metric that we worked on with one of our customers, and that metric is IOPS per terabyte. There's a rule of thumb, not a rigid, set-in-concrete requirement, that there's a minimum IOPS per terabyte that they would like to achieve with bulk storage in order for it to fit in nicely with the architecture. And there's an optimal, and the optimal is largely predicated on how these architectures evolved back in the days when we had 8 terabyte drives. So we've shown the window here, and what I've plotted on the bottom is what happens to IOPS per terabyte with our current single actuator family of drives as we continue to increase capacity. And what you'll see is that at about 20 terabytes, we fall below this rule of thumb target for IOPS per terabyte, at which point there's a need to provide a more efficient alternative for integrating these high capacity drives into data centers that have performance constraints.
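
That crossover can be sketched with illustrative numbers: a single actuator drive has a roughly fixed random IOPS budget, so IOPS per terabyte falls as capacity grows. Both the per-drive IOPS figure and the floor below are assumptions chosen to reproduce the roughly-20-terabyte crossover described above, not published Seagate values:

```python
SINGLE_ACTUATOR_IOPS = 160   # assumed random-IOPS budget per drive
IOPS_PER_TB_FLOOR = 10       # assumed rule-of-thumb minimum

for capacity_tb in (8, 14, 16, 20, 30):
    iops_per_tb = SINGLE_ACTUATOR_IOPS / capacity_tb
    note = "below floor -> dual actuator" if iops_per_tb < IOPS_PER_TB_FLOOR else "ok"
    print(f"{capacity_tb:>2} TB: {iops_per_tb:5.1f} IOPS/TB ({note})")
```

With these assumed numbers the floor is crossed between 16 and 20 terabytes, which is where a second actuator (roughly doubling the IOPS budget) becomes attractive.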

And that is the point when we will be supplying our dual actuator drive. So we have our first platform showing up at 14 terabytes, and we have follow on platforms in the 20 plus terabyte regime. We're in the process now of working with our partners to create the ecosystem that will allow these drives to be integrated into their architectures, so that when we need them, everything is ready. You'll see it in the demo room as well, but we've launched our Mach 2 dual actuator drive into the datasphere.

It's a 14 terabyte conventional perpendicular drive, and it also has a 16 terabyte shingled version. It offers twice the performance of a single actuator drive: about twice the sequential bandwidth, at 520 megabytes per second, and also about twice the IOPS. This provides another option that lets data centers architect the best solution to their capacity and performance requirements, and it delivers efficiency in those applications relative to a single actuator drive. We're in qualification and shipping this product to multiple customers today. So then let's get back to total storage cost management, or total cost of ownership.

We'll go through a few considerations. So the first is all around dollar per terabyte. So I started off today talking about the disk centric architecture for cloud, and I referenced an important quote from Google about their position on this as well. But I also showed that even in the performance world of supercomputers, the disk centric architecture is the best solution to their problem. And that's going to continue going forward and here are the reasons why.

First, the areal density CAGR, or annual growth rate for capacity, is driving cost per bit improvement for hard disk that will continue over the next 15 years, and we estimate the ability to provide over a 20% compound annual growth rate in areal density with our HAMR technology. This contrasts with the last several years: going back to 2012 with perpendicular recording, the CAGR averaged over that 7 year period has been just under 10%. So we expect to see a significant improvement in CAGR when we launch HAMR, and we expect to maintain that CAGR over the next 15 years. Equally important is the relationship with flash. This CAGR projection for our HAMR technology is roughly in line with projections for the flash industry.
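
The difference between those two growth regimes compounds quickly; a one-line comparison over the 7 year PMR window quoted above:

```python
# Cumulative areal-density growth: ~10% CAGR (PMR since 2012) vs ~20% (HAMR).
years = 7
pmr_growth = 1.10 ** years
hamr_growth = 1.20 ** years
print(f"{years} years at 10%: {pmr_growth:.1f}x;  at 20%: {hamr_growth:.1f}x")
```

At roughly 10% a year, density about doubles in 7 years; at 20% it more than triples over the same period.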

So we expect to see an equilibrium persist in cost per bit between flash and disk. And so the architectures that I described today with tiering of data are expected to persist as well in rough equilibrium. And therefore, we expect to see 80 plus percent of the mass capacity exabytes remain on disk in the foreseeable future because of this. When we talk about performance, we have the best portfolio as well. On the left side is a rough model for how total cost of ownership occurs with a large data center, and they have metrics like cost in terms of dollar per terabyte, performance in terms of dollar per IOP and power in terms of dollar per watt.

And our objective with our portfolio is to ensure that we provide the best optionality for data centers to solve the challenges they have in scaling their capacity and performance, and to do that scaling with the most efficient power footprint. We do that by providing the two families of drives. We have our single actuator HAMR platform, which delivers the lowest dollar per terabyte as well as the best terabytes per watt for a disk drive. And we have our dual actuator performance version of that drive, which delivers the best IOPS per watt, or megabytes per second per watt if you're focused on throughput. It's this combination of drives that allows a large data center to architect its tiers in the most efficient way, solve the scale issues it has in terms of capacity and performance, and do it with the most efficient footprint and power.

A big part of our ability to stage the right technology and instantiate that technology in our portfolio is our engagements with our customers, partners and the industry. It's important for us to listen and collaborate and contribute in ways that solve real industry problems. And we do this with our engagements across our customer base, with the consortia that represent those customers and in the open source community. And we're very solid in these areas in both IT2 and IT3 as well as our growing engagements for the Edge and IT4. This shapes our collaborations and it really drives our portfolio to where it needs to be to be the best provider of options for their storage and data management challenges.

And in summary, that gets us to our leadership in innovation to serve the data and storage communities. It starts with areal density leadership. We'll continue to drive areal density with our HAMR technology; we expect to see about a 10x growth in areal density over the next 15 years, taking us from about 1 terabit to 10 terabits per square inch. We'll continue to focus on efficient scale and performance with our Mach 2 dual actuator technology, allowing customers to efficiently scale capacity while meeting their application performance needs.

And we're committed to delivering the lowest total cost of ownership solutions in storage in terms of dollar per terabyte, but equally in terms of the performance and the power efficiency of those solutions. We have deep knowledge of system architecture and data movement, and we're partnering with our customers to ensure that we can apply that knowledge to solve the problems that they care about in their future data and storage needs. And with that, I'd like to invite Gianluca, our CFO, up to the stage. Thank you.

Speaker 10

Good morning, and thank you for coming here today. Today, I will discuss 3 main subjects. The first one is the impact to our FQ1 guidance of the change in useful life for certain of our capital equipment. In the second part of the material, which is the bulk of the presentation, we will discuss how we create value and why we are confident that Seagate will be successful in the storage industry in the future. And finally, we will discuss our capital allocation strategy and the long term financial model that I know you are waiting for.

Okay. Let's talk about the impact to our FQ1. In early August, we guided revenue and EPS: revenue of $2,550,000,000, plus or minus 5%. We have no change to the revenue guidance.

The only change is related to the lower depreciation amount. In FQ1, it is about $25,000,000, or $0.09 a share. So our EPS, which we guided at $0.90 plus or minus 5%, is now updated to $0.99 plus or minus 5%. There are no other updates to the guidance. Okay.
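
The guidance update is simple arithmetic; the implied share count below is a derived figure (and ignores tax effects), not one given in the talk:

```python
# FQ1 guidance update: lower depreciation flowing through to EPS.
depreciation_reduction = 25_000_000   # dollars per quarter
eps_impact = 0.09                     # dollars per share
implied_shares = depreciation_reduction / eps_impact
print(f"updated EPS midpoint: ${0.90 + eps_impact:.2f}")
print(f"implied diluted shares: ~{implied_shares / 1e6:.0f}M")
```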

Let's now discuss long term value. Today, we operate in an industry that has structurally improved a lot in the last 15 or 20 years, moving from more than 10 producers to just 3 players. On top of that, we are now focused on optimizing our profitability and our cash flow through the value cycle that is normal for the storage industry. This dynamic is enabling us to create and enhance value for all our stakeholders: for the shareholders first, but also for our customers, for our suppliers and for our employees.

So let's look at how we improved our market position in that period of consolidation and of strong demand for exabyte volume. The industry in that period of time grew at a CAGR of about 31%. Our CAGR was about 35%, organically and inorganically, and that helped us to increase our market share from a solid 30% in 2003 to a very good 45% at the end of calendar year 2018.

And as I said before, this happened in a period of time when demand was very strong. Sometimes you see this kind of consolidation and improved market share in a market with declining volume demand. This is a market where the volume demand is extremely strong: Seagate was shipping 4 exabytes in calendar year 2003, and in the last calendar year, we shipped 367 exabytes.

So it's a huge increase for us and for the industry. On top of focusing on increasing our market share and making our market position stronger, we also look at how to improve our internal efficiency, first of all in terms of manufacturing footprint. When we last presented to investors, in fiscal year 2016, we were producing in 14 different sites. Despite the major increase in volume during the last 4 years, we were able to consolidate our manufacturing footprint to only 7 sites.

The increase in volume in fewer sites has helped us reduce our cost per terabyte. On the next slide, I will show you how this cost decline played out through the various cycles, every time we were able to double the capacity of our hard disk drives. When we moved from half a terabyte to 1 terabyte, our cost declined by the low 20 percent range. As we moved up in capacity from 1 terabyte to 2, 4 and 8 terabytes, the cost decline improved, up to the mid-40 percent range. Now let's move to the 16 terabyte that we are shipping today.
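
Compounding those per-doubling declines shows why the pace matters; the percentages below are the approximate figures quoted in the talk, applied to an arbitrary starting cost index:

```python
# Cost-per-terabyte index across capacity doublings, using the approximate
# decline percentages quoted above (low-20s%, then mid-40s%, then ~30%+).
cost_index = 100.0
declines = [("0.5 -> 1 TB", 0.22), ("1 -> 2 TB", 0.45), ("2 -> 4 TB", 0.45),
            ("4 -> 8 TB", 0.45), ("8 -> 16 TB", 0.32)]
for step, decline in declines:
    cost_index *= 1 - decline
    print(f"{step}: cost/TB index {cost_index:.1f}")
```

Five doublings at these rates take the index from 100 down to under 10, which is the economics HAMR is intended to sustain.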

We still have a very good cost decline of above 30%. So it's still a very strong cost decline, but it's also showing a slowing in the pace of this cost decline. And this is one of the reasons why HAMR is very important. With HAMR, we expect this cost decline to go back to the mid-40 percent range when we double the capacity of the drive. On top of focusing on manufacturing spending and cost per terabyte, we also look at our efficiency in terms of OpEx.

And Dave briefly touched on this point in his material. We streamlined our product roadmap, and we focus mainly on mass capacity storage. We also reduced the number of sites where our R and D teams were operating, putting these teams together for better collaboration and better efficiency. And in terms of G and A, we also make much more use of internal shared service centers compared to what we were doing before. This helped us reduce the cost from $2,200,000,000 a year to $1,400,000,000 a year in the last fiscal year.

So this is a great 33% reduction in OpEx. Now, talking about R and D and the product roadmap: today, we are leading the industry with our 16 terabyte that, as I said, we are already shipping. We expect in the early part of calendar year 2020 to have an 18 terabyte and a 20 terabyte in PMR and SMR technology. By the end of calendar year 2020, we will also have our 20 terabyte HAMR.

A very important benefit of HAMR technology is that we believe HAMR will be able to scale up in capacity to 30 terabytes, 50 terabytes and above in the next few years. So what is our expectation for the demand for our products? As I said before, the industry was growing at a CAGR of about 31%, and we were doing better than that. The expectation for the next few years is for the mass capacity storage part of the business to grow at a CAGR of 35%. And we also expect mass capacity storage to be almost 90% of the volume for the entire hard disk industry.

So, very strong demand, especially in that part of the segment. We have looked at different things. We have established that we are a very low cost provider. We have established that we are leading the product and technology roadmap, that we expect to continue to lead in the future, and that our outlook for the demand for our products, and in general for the industry, is also very strong. Those are the elements that make us very confident that we will be successful in this industry in the future.

So let's look at how we performed financially during the last few years, first of all in terms of total shareholder return. If you look at the slide, the S and P 500 index had a return of about 54%. Seagate had a return of almost 150%. And if you look at our peer competitors, we are in the top quartile of this group.

So, a very strong return for our shareholders. And in terms of return on invested capital: when we presented here in fiscal year 2016, we were in the third quartile of the S and P 500 group. At the end of fiscal year 2019, we were in the top 20%. This is, in my opinion, a remarkable result, and it is something that we continue to be focused on and want to keep focusing on in the future.

Now the past is very important. The future is more important. And I think the present is a first step to build the future. So let's look at what we expect for fiscal year 2020. At our earnings release, Dave already said that we expect revenue to be sequentially higher, so to have revenue in fiscal year 2020 higher than what it was in fiscal year 2019.

This increase in revenue will also drive an increase in our operating income. And despite higher CapEx, mainly related to HAMR and the ramp of HAMR volume, we still expect very consistent and strong free cash flow through fiscal year 2020. The improvement in business conditions, the expectations for our performance, and the execution of the share buyback program that we have in place will all contribute to earnings per share in fiscal year 2020 that will be higher than they were in fiscal year 2019. Let's now look at our capital strategy for the future.

Because we are so confident that we will be successful in this business, our first priority is to support the business: our CapEx and our OpEx, which, as we have shown you, we have already optimized and continue to optimize. We will also look at strategic investments if an opportunity arises. Our second priority is the dividend, and Dave announced the increase in the dividend. He also announced our plan to review over time the opportunity for further dividend increases.

Our third priority is the share repurchase program. We have an open authorization; at the end of fiscal year 2019, we still had more than $2,000,000,000 available. We are executing this program, and we will be opportunistic.

So it depends on where the share price is and how attractive it is, but we certainly want to continue to fulfill this authorization that we have. And we will do all this while maintaining a minimum liquidity of about $2,000,000,000, and we will continue to optimize our debt structure in line with our EBITDA generation. If you look at the last 4 years, you can see how we executed through our priorities: we spent $2,000,000,000 on CapEx, we paid about $2,700,000,000 in dividends, and we executed a share buyback program of almost $3,000,000,000. So I think we are executing through our priorities exactly as we are saying.

And now let's look at the long term financial model. Based on what we have seen, our product roadmap and our outlook for the demand for those products, we expect revenue to grow between 2% and 6%. We expect operating margin to be in the 13% to 16% of revenue range, and CapEx, which we already guided at our last earnings release, to be between 6% and 8% of revenue. And we remain committed to returning at least 50% of our free cash flow to shareholders.

And now I will ask Dave to come back to stage for closing remarks. Thank you.

Speaker 4

So I hope that's given you a good overview of the company, how we've moved in the last 3 years since the last Analyst Day and where we're focused for the future. I think the pivot in the platforms is one of the big takeaways. We think we've got the company in the right fighting shape for what's coming in the growth of data. When I say sustainability, I think integrity. I think long term.

We have to do what our customers need. We have to do what the market needs in a very predictable fashion. It's important for us to continue to drive our technology. Imagine if it slows down. Imagine the impact that that has.

It's important for us to stay focused. So sustainability ties through all those presentations: customers, technology investment, financial strength. There is growth coming. The growth that will fundamentally drive us is the growth of data. We're excited about it actually.

And the decentralization of that data will probably drive more devices than we know. But you have to listen very carefully to the customers and the markets in order to engineer the right solutions. The time frame for the engineering is actually long as you can see in some of these things. So we have to make sure that we stage the world right. We are building products for FY 2025 today for that 17 zettabytes that needs to be installed.

That's products that are coming off our factory lines today; they have to last that long. The product dynamics are changing quite a bit, and I think we've done a good job with that. And it all comes down to this: we understand our fundamental value creation.

We understand what a good investment back in Seagate is. We make those investments and then we return the rest to the shareholders. That's why I believe Seagate is built for sustainability and built to last. Okay. With that, I'd like to invite the team back up onto the stage.

We'll start the Q and A. I think some of the Seagate ambassadors will be running around with microphones. Make sure that you have a microphone in front of you before you start with your questions.

Speaker 4

Somebody's got a microphone. Okay, go ahead.

Speaker 12

Hey, thanks a lot for all this. Nanda Baruah, Loop Capital. Just a couple, if I could. To start with, you've talked in the past about believing that the gross margin range, really the economic model, should continue to expand for the industry in general. And there's been a lot of talk today about the importance of new technology in the coming years. Can you put those two together around the potential for the gross margin range to go up?

And then a sub-question to that is: if, say, HAMR and new technologies are so important for hyperscalers and for the Edge, might that also give you some ability to have a new kind of pricing conversation with these guys, since it's important for them?

Speaker 12

And I have a follow-up after that.

Speaker 4

A couple of points, and I'll let Gianluca chime in here as well. We have increased the range over time. If you go back historically at Seagate, when we were at the peak of client server, we had a different margin range. And the way I think about this is how we look forward at the business, such that we're modeling our own business. And so I think it's helpful.

At the point that we're at right now, client server is still coming down somewhat. And so that's one of the reasons why we popped out of the range a little bit in a few quarters. Last year, we were at the bottom of our range. So I'm not disappointed in the overall performance, but I think that's how I think about the range planning for the future. When you get into a quarter, this is where a lot of people get tripped up: in quarter, if there's good profit, even if it's outside the range, we'll take it.

To your point, I think it's not so much about technology; it's also about the manufacturing investment. When we get to a point where we have to invest a lot more in manufacturing, that's the first point of stabilization, I think. So for example, the number of heads per drive that we talked about: when that increases, we'll have to make investments in our own factories, which we forecast, and then we would have to see that kind of economic return as well. On the point about the commentary with the customers: the customers are maturing. When you're in a period where you have oversupply, because of that installed base that is being repurposed very quickly, it's a different world.

When you're at a point where there are constraints and some of the timelines are very long, then the customer discussions will mature, and I think it will happen. So Gianluca, do you want to add?

Speaker 10

Yes. During my presentation, I was trying to convey the message that our focus is on profitability: gross profit and operating profit, more than a margin percentage. We really focus on EPS and cash flow, and we will drive our gross margin where it has to be in order to optimize EPS and cash flow.

Speaker 12

Okay, great. Thanks. And then just real quick follow-up. The dividend increase is great. Dave, you mentioned periodic increases.

Could that be annual increases?

Speaker 4

We'll review it annually. But of course, the periods we've been through in the last couple of years have been fairly volatile, so I won't commit to an annual increase, but we will periodically review. And I think this is more a statement of intent about what I want. We understand the value of the dividend.

And what we want is to drive the cash flow high enough and to be able to continue to return that. I have confidence in the cash flow. So at this point, that's why I would say that.

Speaker 10

Thank you.

Speaker 13

Katie Huberty, Morgan Stanley. A lot of investors look at the lack of revenue growth over the past many years and are skeptical when you talk about 2% to 6% revenue growth going forward. So I just want to make sure we understand the drivers of that. What I heard today is that there has been a utilization headwind going from client server to cloud that is lessening going forward, and that you see data growth inflecting and accelerating to that 35% versus 31%.

So those two dynamics are encouraging. But are you also assuming in the 2% to 6% growth market share gains? Are there any other major factors that we should think about that can help investors build confidence around growth going forward when we haven't seen that in the model in recent years?

Speaker 4

Yes. I think if I look back, for example from FY 2017 to FY 2018, we did grow revenue; it was in Gianluca's slides. FY 2019 has had its own challenges, not necessarily because of us or our industry but because of other factors out there, and some of it was cloud cyclicality. We've noticed that cloud cyclicality has changed every cycle, right?

So the cloud is becoming a more diverse place, and it is growing. This time, making sure we're investing for the next peak of the cycle is important. So I think it's important to view it on a different scale than annually. But we do think that because of the change in the customer base toward products that are richer in heads and media, which we talked about, we will see some stabilization there and opportunity for revenue stabilization as well. And then it gets on to the data growth piece.

And once data starts to grow, and the critical components are constrained and the dynamics change, to the earlier question, I think we'll see that revenue growth. So like I said, you probably wouldn't be asking the same question had FY 2019 not happened the way it did. And that's just the reality. But the data continues to grow, and that gives us the confidence in that.

And then, sorry, the smaller parts of the portfolio, which are also more volatile, are starting to shrink as well.

Speaker 13

And just a clarification, Gianluca: when you talk about a long term model, is that a 3 year model, and within that, do you expect to see years of decline and years of growth?

Speaker 10

Yes, it's linked to the period of time that we have discussed, so probably a 3 to 5 year model. And it's always linked to our expectation on the success of our product roadmap. So that is the real driver for us.

Speaker 4

Yes. And Katie, you also mentioned market share gains. There are some things inside the model that assume periodicities, cyclicality in the cloud for example, because those have been so magnified over the last couple of years; we're assuming that, and that's why you may see it with new products. I think market share comes as a function of leadership in technology and customer service rather than anything else. We want to continue to drive that, but it's not a driver inside the models right now.

Speaker 14

Hi. Karl Ackerman from Cowen. Two questions, if I may. So Dave, I think your hard drive TAM outlook implies a fairly significant cannibalization of non enterprise hard drives in favor of NAND flash. So while the mix shift toward enterprise likely drives more favorable margins, do you think you have the right manufacturing and cost footprint in place as you plan for the reduction in consumer oriented drives? And as a follow-up, as you progress toward that 45% expected cost decline in a HAMR technology world, how do we think about the cost decline in the initial stage of the ramp, on lower volumes?

Speaker 4

Thank you, Karl. So one way I think about it, and John can help here a little bit, is that the technology we were building 4 or 5 years ago was in the factories and had its own factory footprint. As we pivot to the new technology, we have to make more investment in heads and media and less in drive numbers and drive types and things like that. The further change from here is actually fairly straightforward. But you can imagine that the capital equipment, the capital utilization for those old drive lines, has now been repurposed.

And now we're talking a lot more about heads and media, because the new drive types are so rich in heads and media. So do I think that if the market extended a little bit further in some of those old markets, we could satisfy that with the manufacturing technology we still have around? Certainly. If it accelerated even further than what we forecast, and it's a key point that we are forecasting it, right? That's what you noticed about the numbers. Then I think we'd have to deal with that, but we would end up repurposing into the growth as well.

Sorry, what was the second question you had there?

Speaker 14

The second question was, as we transition toward a HAMR-based world, you mentioned 45% expected cost declines, I think, over a 5-year period. At least in the initial year or two of the transition to HAMR, how do you think about that cost decline until yields improve?

Speaker 4

Yes. Okay. John, do you want to take that one?

Speaker 6

Yes. Typically, as we launch a new technology and generate volumes in the millions-of-units range, we are able to fully amortize the cost associated with the new technology. And as we showed earlier, the volume of heads and media associated with the exabytes we'll be supplying with HAMR will easily drive those volume ranges. So over the life of the program, we do expect to be able to amortize the cost associated with the new technology.
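
As a rough sketch of the amortization math being described, with purely hypothetical figures (the $50M fixed technology cost and $100 variable cost below are illustrative assumptions, not Seagate's actuals), the per-unit burden of a new-technology investment falls quickly once volumes reach the millions:

```python
def unit_cost(variable_cost, fixed_tech_cost, units):
    """Per-unit cost when a fixed technology investment is amortized over volume."""
    return variable_cost + fixed_tech_cost / units

# Hypothetical: $50M of new-technology cost, $100 variable cost per drive.
for units in (100_000, 1_000_000, 10_000_000):
    print(units, unit_cost(100.0, 50_000_000, units))
```

At 100,000 units the technology cost dominates; at 10,000,000 units it adds only a few dollars per drive, which is the sense in which volume "fully amortizes" the cost.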

Speaker 4

The incremental cost added for HAMR is small compared to the cost per terabyte declines that we'd be able to provide the market, I think.

Speaker 15

Mona Grable with TCW. My question: we only heard the word flash once during the presentation, which is very significant to me. Why do you think the edge will lean more toward hard drives rather than flash? It seems like it makes more sense that the edge is flash.

Speaker 4

Sorry, let's clarify. There will be a lot of flash in the world. It will be at the endpoints, it will be at the edge, and it will be at the data centers, as we talked about. There will be hard drives on the edge as well, not for all applications, but there will be hard drives on the edge for those big data applications. So, on flash versus HDD: it's not necessarily our narrative, it's maybe other people's narrative, because they want a piece of the HDD space. But the world has actually settled out and understands the value proposition much better. There are points where it makes the right sense architecturally to use flash only.

It makes sense. There are other places where scaling a flash-only solution becomes very expensive; it doesn't make sense. It's not a simple world. And by the way, to the discussion that I had with Ravi, there are CIOs and architects who do experiments.

They try things. They're trying certain architectures. They might buy into one person's marketing, and then they realize, as they grow through the data lifecycle, that it doesn't apply. So there will be a lot of edge flash for certain, because edge compute will be memory-centric and very fast.

Does that make sense? Yes. And I'd say it a little bit differently: we have a strong base of knowledge about what the solutions are, so we can apply that. There are dynamics in the flash business that would make you ask, do you want to go all in? We have a supply agreement.

We actually grew revenue last year in flash products, as we talked about. So from my perspective, we can do that; it's just that the economics aren't right. There are other people who have a lot more flash that they have to go manage, and that's their only business, so they talk about that. But I don't want to turn this into a flash versus HDD thing, because that's not the way we think about it at all.

There will be a lot of flash. Ravi, do you make decisions based on flash versus HDD?

Speaker 1

It's an interesting question. There are very few times, when we talk to the infrastructure folks or the CIOs to discuss storage, that we ask, is A better than B? It's never really HDD is better than flash or flash is better than HDD. It is the balance of the different technologies. In fact, the storage solutions that work best for us, we have come to the conclusion, are where all three, flash, DRAM and HDDs, work together in a hybrid environment that gives the best TCO.

In fact, some of the storage solutions we are now deploying for our factories have a layer of DRAM, a thin layer of flash and a sea of HDDs, and they give us very close to all-flash-array performance at a much lower cost.
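
The cost argument behind that hybrid layout can be sketched with a blended cost-per-terabyte calculation. All the dollar figures and tier fractions below are illustrative assumptions, not numbers from the presentation:

```python
# Hypothetical $/TB figures for each tier (assumptions for illustration only).
COST_PER_TB = {"dram": 5000.0, "flash": 200.0, "hdd": 25.0}

def blended_cost_per_tb(mix):
    """Capacity-weighted $/TB for a tiered system.

    mix: fraction of total capacity in each tier; fractions must sum to 1.
    """
    assert abs(sum(mix.values()) - 1.0) < 1e-9
    return sum(COST_PER_TB[tier] * frac for tier, frac in mix.items())

# A thin DRAM layer, a thin flash layer, and a sea of HDDs.
hybrid = blended_cost_per_tb({"dram": 0.001, "flash": 0.05, "hdd": 0.949})
all_flash = COST_PER_TB["flash"]
print(hybrid, all_flash)  # the hybrid blend lands at a small fraction of all-flash $/TB
```

Under these assumed prices, the hybrid system costs on the order of a fifth of an all-flash array per terabyte, which is the shape of the TCO argument being made.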

Speaker 16

Thanks. Yes. Thanks for all the detail. It's Aaron Rakers at Wells Fargo. I hate to do this, but I know that the picture here today is longer term in nature, but I just have to ask the question of how would you characterize the current demand environment?

And I think on top of that, I've gotten a lot of questions around the 14 terabyte to 16 terabyte transition in nearline versus your recent competitor introducing an 18 terabyte, similar 9-platter platform. How do you see that competitive landscape playing out over the near term, and your positioning to recapture share in nearline over the next quarter or two?

Speaker 4

Yes, I think, Aaron, we won't talk about the near term today. But on the next in the next earnings cycle, we'll make those kind of comments. So stay tuned for that.

Speaker 16

Maybe I'll follow up then real quick. Can you talk a little bit about how you see HAMR factoring in, as far as the positioning of that product relative to, I'm assuming, the longer tail of what you do today in terms of the 14, 16, 18 and 20 terabyte products? How do we think about the ramp to mass volume production as you introduce it in 2020? Does that really get to scale in mid-2021, or is it even later out?

Speaker 4

Yes, I think that's actually an interesting point, and it goes way back in history. If you think about 3 terabytes to 4 terabytes to 6 terabytes, almost every time we were changing the platform, changing the mechanics, reacting to something new. For us, the 16 terabyte platform is very leverageable. We've talked publicly about HAMR being a drop-in replacement into that platform, and we've architected it intentionally that way. How that matures over the many years, like John talked about, and I'll let John opine here too, remains to be seen. But can we take it all the way to 30 terabytes or something?

Platforms are very important to us. The industry, at the top of this perpendicular magnetic recording S-curve, has been changing the platform a lot, and HAMR allows us to leverage the platform quite a bit longer.

Speaker 6

Go ahead. Yes, maybe I'll just say the equation that large data centers optimize against is total cost of ownership. And when you have a lower dollar-per-terabyte drive available, there's a fairly rapid adoption of that drive if the supply is in place. And so the expectation is that as we deploy HAMR in our future platforms, in that 20-plus terabyte regime, there will be a rapid adoption of that platform, because it will deliver the lowest total cost of ownership in the market.

Speaker 4

I think the platform leverage is important from one other aspect as well. We talk all the time about the highest capacity drive, but there are a lot of data centers that cannot tolerate the highest capacity drive for various reasons. Their infrastructure isn't ready for it, their software isn't ready for it. So there are a lot of 12 terabyte and 16 terabyte and 8 terabyte drives, especially on the edge.

When you get to the edge, there are cost-sensitive, price-sensitive points out there. That's why we believe this platform architecture will serve us very well: with higher areal density, we'll be able to address those capacity points in a more efficient way.

Speaker 17

Mark Delaney with Goldman Sachs. Thank you very much for putting the presentation together and hosting the day. I have a question on the target financial model. If I look at the EBIT margins for the company in fiscal 2017, 2018 and 2019, they ranged between 13.8% and 16.5%. That was before the change in depreciation, and it includes stock comp, each of which I think gives you about 100 basis points in the future off of those historical levels.

If I understood, the new target model is 13% to 16% operating margin going forward. So it seems like there are some headwinds you're baking into the operating margins, potentially some investments or mix shifts. Can you just help us better understand what some of those margin differentials are that we need to be cognizant of in the future? Thanks.

Speaker 10

Yes. I would not characterize that as headwind. I'll say there are assumptions in terms of our cost, our volume and industry pricing. So of course, we can be more or less conservative on those assumptions. But we think that is a good model, and hopefully it helps you build your own models.

The adjustment that you are talking about, especially the one on depreciation, is actually not an accounting change. It's an alignment of the cost to how we use the asset. So if in the past we were using the asset for 5 years and now we use it for 7 years, it's actually a change in the cost, not a change in accounting. I think that is an important clarification for everyone.
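
The mechanical effect of that useful-life change is simple straight-line arithmetic. The $700M asset base below is a hypothetical number chosen only to make the math visible, not a figure from the model:

```python
def annual_depreciation(cost, useful_life_years):
    """Straight-line depreciation expense per year."""
    return cost / useful_life_years

# Hypothetical $700M asset base: extending useful life from 5 to 7 years
# lowers the annual expense without changing the cash actually spent.
d5 = annual_depreciation(700_000_000, 5)  # expense with a 5-year life
d7 = annual_depreciation(700_000_000, 7)  # expense with a 7-year life
print(d5, d7, d5 - d7)
```

The same capital outlay is simply recognized over more years, which is why management frames it as a change in cost alignment rather than in accounting.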

Speaker 4

I think the operating income range, though, to your point, is wide enough to allow us the latitude that we need. And obviously, if we see upside opportunities, we may adjust over time. I think it's a good way to continue to model the business.

Speaker 18

Hi, Steve Fox with Cross Research. Two questions, please. First, at the core of the thesis seems to be that 90% of the data being created is in your sweet spot. Can you just foot that with the fact that you also see more data being created that's random, which doesn't come out of the sweet spot, especially as IoT progresses? How do you maintain that big chunk being HDDs as more of the data becomes random?

Speaker 4

It's random and unstructured. Yes, I'll let John answer. But yes, it is random. It's not necessarily sequential like streaming video, although some of it is, but it's unstructured as well, which means the architecture still has to be able to address it quickly in order to process it well. But I think that's what John got to: the performance part of the mass capacity tier is still really important.

Speaker 6

Yes. So it is true; even today, in a large data center, the majority of IO is still random IO. But what happens is the data is tiered. A common metric is, if you look at a large data lake, many petabytes to exabytes of data in a consolidated data center, IO activity tends to be concentrated in a fairly small percentage of that total aggregate data lake.

And this is where this tiered architecture delivers value: there are algorithms that predict what data is going to be accessed over time, and data that the algorithms believe will be accessed is promoted into a lower latency tier, a flash or DRAM tier. All of the rest of the data is in this hard disk architecture, because it has the right amount of performance to feed into that lower latency data tier. And our expectation is that this architecture, because it's efficient and takes advantage of the ability to model and predict what types of data are going to be used, will continue to be the preferred architecture for managing total cost moving forward.
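
A toy model of the promotion mechanism John describes might look like the following. The promotion threshold, tier size, and frequency-based policy are illustrative assumptions; real systems use far more sophisticated predictive algorithms:

```python
from collections import Counter

class TieredStore:
    """Toy tiered store: frequently accessed keys get promoted into a small
    low-latency (flash/DRAM) tier; everything else is served from HDD."""

    def __init__(self, fast_capacity, promote_after=3):
        self.fast = set()               # keys currently resident in the fast tier
        self.fast_capacity = fast_capacity
        self.promote_after = promote_after
        self.hits = Counter()           # access frequency per key

    def read(self, key):
        self.hits[key] += 1
        if key in self.fast:
            return "fast-tier"
        # Promote hot data; evict the least-frequently-used resident if full.
        if self.hits[key] >= self.promote_after:
            if len(self.fast) >= self.fast_capacity:
                coldest = min(self.fast, key=lambda k: self.hits[k])
                self.fast.remove(coldest)
            self.fast.add(key)
        return "hdd-tier"

store = TieredStore(fast_capacity=2)
for _ in range(3):
    store.read("hot-object")       # the third read triggers promotion
print(store.read("hot-object"))    # subsequent reads are served from the fast tier
```

The point of the sketch: only keys that prove hot occupy the expensive tier, so the "sea of HDDs" can hold the bulk of the data lake while most IO still lands on flash or DRAM.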

Speaker 18

That's helpful. Thank you. And then just on the revenue outlook, the longer term revenue outlook, what do you anticipate sort of the cyclicality from cloud data centers looking like? Is it similar to what we've seen in the last couple of years? And also the non HDD business SSDs and systems, is that contributing to the growth or not?

Speaker 4

Yes, there's still revenue growth in both of those other businesses. I'll talk about systems real quickly, because we didn't talk about that before. That's as much about go-to-market and time-to-market as it is about revenue. Today, our customers have to solve so many different problems with qualifications, and there's a lot of complexity for people waiting to integrate in order to launch a 16 terabyte drive, for example. If we can just do the integration for a lot of people, and then leverage that, we get relatively faster time to market.

So it's not just about revenue. From my perspective, revenue growth in the future is largely driven by the diversity of the cloud, not just by 1 or 2 cloud service providers. And I think I made this mistake last cycle, but I do think it's generally true that the more diverse customer sets we're seeing, as the public cloud and private cloud continue to expand, should smooth over some of the cyclicality that you referenced. It didn't happen in FY 2019 like that, though.

Speaker 5

Thank you. C. J. Muse with Evercore ISI. I guess two questions.

One follow-up on the earlier question from Mark. Essentially, with the removal of stock-based comp and the extension of the useful life, you're guiding to midpoint operating margins roughly 150 basis points below your prior target. And from your prior response, what I heard was it's just conservatism. Is that what you're saying, or are there other factors that we should be thinking about?

Speaker 4

I would say that as we model the ranges for the business, this is the way we plan the business going forward. Tactically, whatever is going on, we react to the local environment. But I think that's a good way to plan the business long term, if that makes sense.

Speaker 5

And then as a follow-up: clearly, capital returns and free cash flow are a very important part of what you're telling us today. On that front, can you share with us what your free cash flow margin targets are going forward? It looks like the last 2 years were around 12%, and prior years 15%, 16%, 17%. Based on the current math, it looks like it's 8%, 9%, 10%. Is that how we should be thinking about things?

Speaker 4

Well, I think now you're inferring revenue guidance and things like that that we're really not going to do today, but go ahead.

Speaker 10

Yes. We don't guide on free cash flow, mainly because the CapEx timing can be difficult to fully predict. What I said before is that for this fiscal year, we still expect very strong free cash flow generation. Then we need to ramp HAMR, and so we'll see how much CapEx we need to spend for this important drive for us.

And we will always keep a close eye on improving our free cash flow.

Speaker 4

Yes. We think there's an opportunity on free cash flow even as we raise CapEx this year. But again, as you get further out in time, we're not going to guide that right now.

Speaker 5

And could I just clarify one point? On the EPS guide for fiscal 2020 being higher than fiscal 2019, are you comparing apples to apples, excluding stock-based comp? Or are you including and then excluding?

Speaker 10

We are comparing what we will report.

Speaker 4

What you report?

Speaker 14

Yes. Great. Thank you.

Speaker 11

Thank you. It's Jim Suva here from Citigroup. You were pretty clear about your forecast on sales. I think there's some confusion in the room about the useful life change for the accounting. Specifically, when it comes to your CapEx, I believe you said long term it will be 6% to 8% of revenues, but I believe you're also adjusting your assumption for the useful life of your equipment to be longer.

So why wouldn't your actual CapEx percentage need to be less than 6% to 8%? It looks like it's actually increasing, and your operating margins look like they're being helped by about $100,000,000 next year versus this year from the assumption change. So are you having to invest more for HAMR?

Speaker 10

Yes.

Speaker 4

That's the answer to the question. So the CapEx is going up. And actually, if you caught my earlier comments, it's being repurposed from manufacturing site consolidation to investment in heads and media technology.

Speaker 19

Thank you so much. Ivan Feinseth, Tigress Financial Partners. Thank you for taking my question. Can you go into a little bit more detail on your edge in streaming video? Within the next 6 months, there are going to be 5 major video streaming services: Netflix now, and Disney Plus is coming, which I think will exceed Netflix's 160,000,000 subscribers. By the way, each of those services allows 5 people to be part of the subscription, so there are potentially more than 160,000,000 people that could watch Netflix. So there's probably going to be, at any one time, a billion people around the world potentially viewing what is going to be a massive amount of content. So how do you focus on that opportunity? And what is your edge in that opportunity?

Speaker 4

So we are talking to CDNs. I think that term is becoming out of vogue now, but we are talking to those people. Their architectures are changing as well.

It's kind of interesting, because if everyone's watching the same video, the same cat video or something like that, the amount of storage that's required is not very much; there's a lot of requirement on networks and compute and other aspects. But the diversity of data, which is growing as well, is very interesting: being able to access data that may not be what was just created, but was created a long time ago. How that's changing is really interesting, and John is right in the middle of a lot of this.

Speaker 6

Yes, with the edge networks that support content distribution, we're trying to help them architect a lower cost solution for the kind of cache they need at the various points of presence they have around the edge to service the end customer. And many of the variables are similar to what a large cloud data center pays attention to. You have to have cache, in the form of flash in most cases, in these edge networks, but you also need bulk storage to make sure that you have all of the content available; then, to serve demand spikes, you cache content in your flash layers. So the architectures are not dissimilar from a cloud architecture, and our expectation is that the same types of technologies we talked about today are super relevant to help address the problems that content distribution networks are going to face as the user base grows.

Speaker 19

But for example, when there's a popular show, there are going to be more iterations of that one show. So the popularity of the show will probably drive more need for storage of that one show, and then less as it goes down in popularity. So it's also going to be changing.

Speaker 4

Yes. Another interesting point is that it's not just the consumption side of media and entertainment that is driving a lot of requirements for technology, but the creation side as well, which is fascinating. There's a lot of large data that moves around in a very secure way, because the cameras have very high resolution, just for the quality of the entertainment. So there are creation and consumption vectors that are really interesting for us.

Speaker 19

Part of the appeal, or the customer win, will not only be the content these streaming services have, but the service they can provide: if the stream keeps timing out, they could lose customers because they can't meet the speed.

Speaker 4

That's right. Yes.

Speaker 19

So one other question: could you go into a little detail about the technology of the dual actuator? And you didn't really talk much about SSDs or hybrid drives today. Where do you see opportunities or market needs in those areas?

Speaker 4

Yes, I'll let John talk about dual actuator; I'll take SSDs and hybrid drives. Hybrid drives have been around for a long time. I think our total shipments are over 35,000,000 units, and we continue to ship hybrid drives. A hybrid drive has flash on the board, because the requirements of that particular system are a fast response on a few things while more data streams back in. It's actually a microcosm of a data center, where flash is used on the front end as well.

SSDs, we didn't talk about very much. We do have an SSD portfolio. We talked about only one of our products, the highest capacity product, but our SSD portfolio is still serving customers. We had customer wins year over year. We don't control the raw media there.

So we have to understand the customers and their installed base, and we can still add some value there. But we have to be very, very careful in that market, I think. Yes.

Speaker 6

So, dual actuators: pretty straightforward to explain. In a normal single-actuator drive, there's one actuator, and you can read and write from only one head at any point in time. There's a single electronics path through the drive: single preamp, single channel, single controller. In a dual actuator drive, we have 2 coaxial actuators which are fully independent, and there's fully independent electronics supporting those 2 actuators. Because of that, you can simultaneously read and write on 2 heads at the same time, one head per actuator.

So the drive logically presents itself to the system it's installed in as 2 disk drives. That's how the system interacts with the drive: it thinks it has 2 disk drives in there, each at half the capacity of the total drive. So it essentially looks like 2 spindles to the system it's plugged into.

Speaker 4

It's not easy, because the hundreds of millions of slots out there can only tolerate a certain performance envelope, a certain power footprint. Power is very important with 2 actuators, since we developed 2 electronics chains for the write and read circuitry. So it's not as simple as just mechanically making 2 actuators; there's a lot of complexity that goes into it.

And then of course, the customer has to respond to that as well.

Speaker 20

Thank you. Mehdi Hosseini, SIG. I have two follow-up questions. Gianluca, going back to your revenue guide of 2% to 6%, can you help me understand what your mix of mission critical is today? And how does that mix change over the next 3 to 5 years as you hit your revenue target of 2% to 6%? And I have a follow-up.

Speaker 10

Yes. The mission critical volume is not declining too much. But mission critical as a percentage of total volume is obviously declining, because mass capacity storage is increasing so much.

Speaker 20

So you're assuming that 10K RPM will remain kind of flattish?

Speaker 10

Not as a percentage of the total, but as pure volume, it will be fairly stable with where we are today for the next few years.

Speaker 4

So Mehdi, what's happened in the last couple of years is that a lot of the lowest capacity mission critical drives have gone away. So boot drives, if you will, 300 gigabytes, 600 gigabytes, whether it's 10K or 15K. And we've mixed to the last platform we did, which was 2.4 terabytes at 10K. We think there's a good value proposition; it's not just drives versus flash in this arrangement, which is how a lot of people think about it, because sometimes the system might have 12 drives in it, and Hadoop workloads are perfect examples where multiple actuators actually benefit from a performance standpoint, and there may be flash nearby anyway. So architecturally, we think there's going to be a long tail to this in SAS drives; like I said, there's a huge installed base of the 2.5 inch footprint.

From an exabyte perspective, it becomes small, like Gianluca said. By 2025, that's about 10% of the exabyte TAM in the numbers that we showed, not just mission critical, but the entire client-server portfolio.

Speaker 20

And then in that context, if I just focus on the growth area, call it cold storage: what is the scenario where you actually back up on the network rather than moving the data to cold storage? Is that baked into your

Speaker 4

Sorry, I didn't catch that last part about the backup.

Speaker 20

Have you contemplated a scenario where some of the backup is done on the network, in something like a hybrid drive, rather than moving the data into cold storage?

Speaker 4

Yes, there are many different architectures that, for example, are being pioneered by cloud service providers to address issues like this, because, to our point all day, the tiers are not simple. They get very complicated. Even toward the colder end, the tiering gets fairly granular: nearline, not offline, completely offline. So there's no chasm in some workloads between 100 milliseconds and a week, right? There are all kinds of different applications that may need 100 milliseconds, then one second, then 2 seconds, and so on, to your point.

Speaker 1

So, John, do you want to take that? Yes.

Speaker 6

I mean, there's enormous attention in the industry right now on the usability of cold storage, what would typically be in something like a tape system, and on improving the accessibility and usability of that category of storage. So we do have quite a bit of focus on understanding that use case and on to what degree we can architect hard-disk-based solutions that address the accessibility and usability objectives people have, which they don't presently have with their tape systems. So there's likely opportunity there, but it's an area that we're continuing to look at carefully to see how we can help solve problems in that area.

Speaker 4

So I think Shanye is telling me 2 more questions. Okay, so let's go.

Speaker 21

Thank you. Anand Srinivasan, Bloomberg Intelligence. Quick question on pricing dynamics as the end market becomes a little bit more concentrated. HAMR gives you these cost benefits of 45% expansion; you now have a concentrated customer base, they're buying higher capacity points, and they're seeing cheaper drives.

Can you talk a little bit about pricing dynamics there? Won't they want to extract better pricing from you on a per-terabyte or per-exabyte basis relative to previous capacity points? And the second question: the mix assumption of HDD to SSD seems to assume a stable ninety-ten split. Some thoughts on why that assumption should or shouldn't change?

Speaker 4

Yes, I can let John answer that second point. I think on the first point, it's an interesting and evolving landscape. Once upon a time, cloud service providers bought from our OEMs. So it was a different environment altogether. And at that point in time, maybe the OEMs wouldn't even want us talking to the cloud service providers.

We matured through that period. That's why, when we talk about IT 4.0, we see other people starting to mature into that environment as well. I made reference to this earlier: the diversity of the market, and the supply and demand balance, have reached a point where people are having much more mature conversations, to your point. People want the lowest cost per terabyte, and that's what we intentionally drive with technology and other things, so that we can get that product out into the market. I do think there will be a balance when supply and demand become issues, like we talked about, and when some of the customer diversity requires longer lead times, because you'll have customers who say, I'm building a data center along this timeline; I need to make sure that product is there when I need it.

Big-scale players may have more flexibility than that, or have grown up with more flexibility, but against the supply-demand environment that we've had, it will change.

Speaker 21

So, to be specific, we're not looking for steeper declines in the HAMR world from a price-per-terabyte standpoint?

Speaker 4

I don't think that's a good way to think about it. I mean, obviously, everybody would like that from a customer perspective, but I don't think that's a good way to think about it.

Speaker 6

Yes. So on the second part of your question: in general, in a large data center, the user experience is governed by latency. How long does it take for my request to go through? That user experience is typically fulfilled with the low latency layer of the storage, flash and DRAM. But decision making for the bulk storage tier is almost exclusively governed by dollars per terabyte, where incremental performance, even substantial incremental performance, doesn't change the value proposition for that storage layer.

It's almost completely governed by the cost of the storage, dollars per terabyte, and to a degree also the power. So the reason we think the ratio is going to stay roughly the same is that we don't see any significant change in the relative cost per bit of flash versus the relative cost per bit of disk. Because of that, decisions are going to migrate toward the lowest cost-per-bit architecture, which remains disk. That is the central argument for why that ratio is going to stay roughly the same as it is today: the relative cost-per-bit dynamics are staying the same going forward.

Speaker 22

Thanks. Nehal Chokshi from Maxim Group. John, nice presentation, especially the Mach.2 stuff. What are the challenges associated with scaling the number of independent actuators with the number of disks over time?

Speaker 6

So, the basic challenge is the form factor constraint: what can we fit inside the form factor? We actually have designs and architectures that let us continue to scale. And as adoption of our dual actuator progresses over the next several years, we'll learn more about what direction we want to take the designs that we have; we have several different ways to introduce parallelism, from either additional actuators or additional heads that can read and write. And, working with our customers, partners and the ecosystem, we'll start focusing in on what the next generation is going to look like.

But we do expect to see quite a bit of traction on a dual actuator configuration up to the, let's say, 30 terabyte, 40 terabyte range, at which point we expect that additional performance innovation would be warranted, and we do have the ability to add additional actuators as well as activate additional heads.

Speaker 22

That was a really nice chart on the optimal versus minimal IOPS per terabyte. What are those actual absolute levels?

Speaker 6

They're in the 5 to 10 range: 5 to 10 IOPS per terabyte. And I would say they're rules of thumb. I wouldn't categorize them as concrete metrics that apply in general across the industry, but they're pretty good rules of thumb for how the architecture is going to be scaled.

Speaker 22

Is that 5 to 10 the optimal or the minimal?

Speaker 6

5 on the minimum, 10 on the optimal.
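
To make the rule of thumb concrete: IOPS per terabyte is just a drive's random IOPS divided by its capacity. The ~80 random IOPS figure below is a nominal assumption for a single-actuator nearline drive, used only to show why the metric tightens as capacities grow:

```python
def iops_per_tb(drive_iops, capacity_tb):
    """IOPS-per-terabyte density of a drive, for comparison against
    the 5-10 IOPS/TB rule-of-thumb band discussed above."""
    return drive_iops / capacity_tb

# A nominal ~80-IOPS drive sits inside the 5-10 band at 8-16 TB,
# but falls below the floor at 32 TB, one way to see why parallelism
# (e.g. dual actuators) matters as capacities grow.
for cap in (8, 16, 32):
    density = iops_per_tb(80, cap)
    print(cap, density, 5 <= density <= 10)
```

Doubling the actuator count at a given capacity doubles the numerator, pulling a too-cold drive back inside the band.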

Speaker 22

Got it. Okay. And then, Gianluca, why is useful life being extended from 3 to 5 years to 3 to 7 years?

Speaker 10

It's the utilization of our assets. As you know, every quarter we do this analysis to see how we are utilizing our assets and what the estimate is for the future. We expect, also with HAMR, to be able to use the assets a little bit longer. We are starting a new technology that will last for several years, and this is the basis of our decision.

Speaker 22

And historically, has it been in the 3 to 5 year range, or has it actually been the 3 to 7 year range?

Speaker 10

No, in the past, 3 to 5 was correct.

Speaker 4

The capital profile has changed as well, as we've taken the manufacturing footprint down. So it's a continued assessment. I'd like to thank everybody for their participation today. We're going to cut the webcast, but before we do, I'd also like to thank our employees, our suppliers, our customers and all of our other partners.

And for those of you who are here, please join us out in the lobby. Thanks very much.

Speaker 10

Thank you.
