Good morning. Welcome to Intel's autonomous driving day and a warm welcome to those of you joining us via webcast today. We're very excited to have you here. Just a few logistical details before we get started. If I can ask you to please turn your phones to silent, that will ensure that we're able to listen to the presenters.
And also as a reminder, we have a guest Wi-Fi set up here called IDF-16, and it does not require a password. With that out of the way, I'd like to read the risk factors and we'll move on to the agenda. Okay. So today's presentations contain forward-looking statements. All statements made that are not historical facts are subject to a number of risks and uncertainties, and actual results may differ materially.
Please refer to our most recent earnings release, 10-Q and 10-K for more information on the risk factors that could cause actual results to differ. If we use any non-GAAP financial measures during the presentation, you will find on our website, intc.com, the required reconciliation to the most directly comparable GAAP financial measure. A real quick recap of our agenda. So Doug will first kick off with a strategic overview and the big picture. He'll then be followed by Diane, who will offer some insights on artificial intelligence.
She'll be followed then by Asha, who will talk about 5G and the connected car. Asha will then hand off to Doug Fisher to talk about end-to-end software value. And then we'll invite Doug Davis back on stage to wrap it up. With that, let's get started.
Autonomous is not a product. It's not a technology. It's a convergence of innovations, a vision, of partners, of expertise. And it starts with Intel's end to end suite of capabilities. We are focused on driving the autonomous revolution.
Intel has a vision. It is founded on the fact that only Intel offers secure data center technologies, a smart interface, end to end intelligence across the network, and maximum compute performance. Our trusted partnerships are just as critical as our technology. Together, we have built a world class team of pioneering engineers, researchers and thought leaders. At Intel, we are committed to powering the promise of autonomous driving and the future of transportation.
Ladies and gentlemen, please welcome Doug Davis.
Good morning. Welcome, and thanks for being here with us this morning. I was putting together my thoughts for this just a few weeks ago, kind of getting started, and I'd come to this point of creating this really profound statement about how autonomous driving was really going to transform the world. And it turns out that's a phrase we're hearing almost every day in the press, right?
We're hearing that autonomous vehicles will revolutionize the way that we live and the way that we work. But then realize, of course, that we've been saying the same thing about the Internet of Things and how the Internet of Things will transform the way that we live and work over the coming decade. We've been saying that we're going to go from connecting the unconnected to building smart and connected things and then ultimately a phase where we will have software-defined autonomous things. So of course, cars are the best example of that third phase. We're going to profoundly change the way that we use cars as a means of transportation over the next 10 years.
But I also want to remind you, we're also going to see buses and trucks and ships. We're going to see autonomous robots within factories. We're even going to see autonomous functions happening in areas like retail stores. So that third phase is a whole range of things that will become autonomous and need solutions to support them. But in order for this to happen, of course, the amount and complexity of computing needed to support those kinds of autonomous functions will go up exponentially in order to deliver these end-to-end connected systems.
And then today, we'll help you understand, we'll guide you through our thinking and how we are the best company with the capabilities to deliver these kinds of end-to-end solutions. But first, I thought I'd spend a few minutes just talking about the profound impacts. I said that at the beginning. So let's look at how autonomous vehicles will affect the world that we live in. There was a recent Morgan Stanley report that predicts $1.3 trillion, yes, with a T, in annual savings in the U.S. alone from the implementation of fully autonomous vehicles. Now that number on a global basis is more like $5.6 trillion. And for the U.S., that would equate to about an 8% impact to annual GDP. And you can see there are a number of contributors to that number, right, about $500 billion just from productivity gains. That's a big number. But if you look at it, Americans spend about 75 billion hours a year just driving. That averages out to about 1.5 hours per day per person.
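Those two figures are self-consistent; a quick back-of-envelope sketch shows the driver count they imply (the 137 million is derived from the quoted numbers, not a figure from the talk):

```python
# The talk quotes 75 billion U.S. driving hours per year and about
# 1.5 hours per day per person; the implied number of drivers follows.
HOURS_PER_YEAR = 75e9            # figure quoted in the talk
HOURS_PER_PERSON_PER_DAY = 1.5   # figure quoted in the talk

implied_drivers = HOURS_PER_YEAR / 365 / HOURS_PER_PERSON_PER_DAY
print(f"{implied_drivers / 1e6:.0f} million drivers")  # ~137 million
```

That is roughly in line with the number of licensed drivers in the U.S., so the two quoted figures hang together.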
Some of you may spend a lot more time doing that commute on a daily basis. But now think about an autonomous vehicle, right? I can do phone calls, I can do email, I can catch up on binge-watching that series I'm trying to keep up with, right? Maybe watch a movie, take a nap, get some sleep, right? So, big impact from productivity gains, and a $488 billion impact from fewer accidents.
There are more than 30,000 people who die in automobile accidents in the U.S. on an annual basis. And more than 90% of car accidents are caused by human error. So those costs include things like property damage, lost household production, and lost earnings, but of course they also include all the medical expenses and emergency services that result from these kinds of accidents.
We'll look through a few of the others as well. You see there will be about $150 billion from fuel savings. Part of it is that we know if we use cruise control and drive in a very smooth manner, right, we get better fuel economy. But just think: we can improve the aerodynamic efficiency of the car because we can eliminate those mirrors. We won't need those anymore.
$138 billion through reduced congestion. Autonomous cars can flow more smoothly through urban environments, and cities can use smart city infrastructure to improve the efficiency of intersections and traffic flows. All of these are obviously significant financial implications, but I want to make sure that we're not minimizing the impact that autonomous vehicles can have just by saving human lives by eliminating accidents. So if we think about how we interact with cars, that really hasn't changed in over 100 years, right?
A driver gets behind the wheel, he or she controls the car, and gets from point A to point B, the destination. And of course, there are other passengers riding along, there are bossy people in the back seat helping you navigate, right? If we think about the future of autonomous vehicles, we're all just going to be riding and nobody is going to need to help the driver navigate. Taxis are going to make a major shift, right? They're going to move to fleets of autonomous vehicles, again eliminating the driver.
And this is where I think we'll see the first real widespread use of these kinds of vehicles. In fact, a study by Roland Berger predicts that 30% of taxi fleets will be fully autonomous by the year 2030. And at that point, only 45% of the total miles driven will be by people in private cars. And it's strictly passengers now, right? We're just along for the ride.
What are we going to do with that time, right? I talked a little bit about catching up on things. But as we think about those usage models, reading email, maybe doing a little shopping, watching a movie, we're going to be like plane passengers, right? But we'll also have the opportunity in these autonomous vehicles to have stunning bandwidth delivered to the car. And that same Morgan Stanley report predicts about a $5 billion net new media revenue opportunity will come from autonomous vehicles.
So before we start digging into details, I wanted to align on some terminology around levels of autonomy. There are two common standards out there, one from the Society of Automotive Engineers, one from the National Highway Traffic Safety Administration. For the purposes of today, we're going to use SAE as the model we refer to. So we'll talk about L2, L3, L4, L5. And if we click through them quickly, L2 would represent partial automation.
And these are examples where maybe cruise control is working in combination with lane centering or lane departure warning. L3 is conditional automation. This is where we have driver-assist functions that allow for hands off and eyes off, but the driver still has to be able to take control at a moment's notice. L4 then becomes high automation. The vehicle is designed to perform safely in all critical driving functions and monitor roadway conditions for an entire trip, but a driver would still be in the vehicle.
Level 5 is full automation, right? You can turn your seat around, play cards with the other passengers, take a nap, whatever you want to do in the vehicle. So that will kind of ground us a bit. We've also gotten a lot of questions about, hey, so what is the size of this market for autonomous vehicles? So we went off and we looked at historical growth rates of automobiles, right?
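For reference, the four levels as described above can be captured in a small lookup table; the wording here paraphrases the talk, not official SAE J3016 text:

```python
# SAE driving-automation levels as described in the talk (L2 through L5).
# Descriptions are paraphrased; see SAE J3016 for the normative wording.
SAE_LEVELS = {
    2: "Partial automation: e.g. cruise control combined with lane centering",
    3: "Conditional automation: hands/eyes off, but driver must retake control on demand",
    4: "High automation: vehicle handles the full trip, driver still present",
    5: "Full automation: no driver needed at all",
}

for level, description in sorted(SAE_LEVELS.items()):
    print(f"L{level}: {description}")
```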
And we've projected there will be about 110 million cars sold in the year 2025. However, there are also a lot of discussions suggesting that with the increase in fleet utilization and ride sharing, private car ownership could go down, right? And so the other case that you see on the slide here could be more like 105 million cars sold in the year 2025. So as I was thinking about this, I thought I'd ask all of you a question: how many of you have been using Uber exclusively while you've been in San Francisco for IDF?
Show of hands, right? So now think about that. If you lived in San Francisco, do you need to own a car, right? So I think that variability is something to really take into consideration. However, as we thought about this, we really think there are other dynamics that are happening that will be much more interesting when we think about the size of this market.
So we looked at 2016, today, and the vehicles that are being sold and the amount of functionality being delivered: compute performance anywhere from half a teraflop to as much as 10 teraflops when you add together all that computing. We're driving millions of pixels. We have gigabytes of embedded storage. The network within the car is driving tens of megabits per second of traffic. And from a safety standpoint, they operate in a mode termed fail-safe.
So in other words, some subsystem may fail, the driver gets a warning and immediately has to deal with whatever that system is, but we know what's happening there. And many cars today have over 100 or even more than 150 electronic control units delivering all of these functions that exist within the car. So again, let's project forward to 2025 now. The raw compute needed will increase by about 10x, but the number of pixels that we think we need to be able to drive within the car will increase by about 1,000x. We're going to have more and more screens to be able to interact with.
And the amount of storage necessary will increase correspondingly. And the bandwidth within the car will need to increase as well. And that bandwidth is delivered via the wiring harness. If you pull the skins off these cars, you're going to see wires everywhere. It turns out that's the third heaviest and third costliest component within a car, and it consumes about 50% of the labor cost of putting a car together.
So think about that as well, and the opportunities maybe in the future with optical technologies or even wireless connectivity to handle that increase as these compute workloads begin to consolidate and are virtualized. And of course, safety here will transition as well. We're going to move to a definition called fail-operational. This means implementing the necessary level of technologies and redundancies so that even if a subsystem fails, the vehicle can still safely reach the destination we're headed to with the built-in capabilities that we'll have. And all of this is going to happen even as that overall compute workload increases by an order of magnitude.
So, of course, as we think about L4 and L5 autonomous vehicles, they need eyes and ears just like us human drivers, right? Those are probably the two most important senses that we need when we're out driving around. And you see here by this graphic that there will be a dramatic increase in the amount of sensor data that needs to be generated, fused and then consumed by these vehicles. You can also see that the corresponding number of things that the car needs to be able to see and deal with will vary based on the conditions, right? If you're out on the highway, you're going to have other cars and trucks and motorcycles, lane dividers, some signs, right?
But there are a limited number of classes of things that we need to deal with, albeit at the same time the vehicle speeds are going to be relatively high. If you then take that exit ramp, right, and you go into more of an urban environment, the classes and number of things will increase dramatically, right? Not only do we have all those things that we were dealing with on the highway, we've got pedestrians and bicyclists, cats and dogs, right? We've got all these other classes of things now to worry about, but the speed at which we have to deal with them drops because we're going to be traveling at a lower rate, right? The systems that get implemented need to be adept at doing both and managing all of this data and all the environmental factors.
So if you remember, we talked about 110 million cars shipping on an annual basis, and I thought we'd take all of those facts that I just went through and look at the amount of compute that will be needed in relation to the number of cars shipping. The overall computing will increase much more dramatically, in fact exponentially. Today, as I said, these newer cars can have, let's say, half a teraflop per car. So we multiply that by 80 million cars; those are the estimates of the number of cars shipping in 2016.
And you can see that amounts to about 40 million teraflops that get shipped in the skin of cars going out every day. By 2025, we're conservatively assuming a mix of L2, L3, L4, L5 vehicles and calculating the amount of compute each will need. Cumulatively, that results in about 300 million teraflops that will be put on the road in that year. And we're well poised to support computing workloads that are growing exponentially, thanks to Moore's Law. So if you remember from BK's keynote, right, he talked about a wide number of Internet of Things workloads that are generating vast amounts of data that need to be analyzed.
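That shipped-compute arithmetic can be checked with a quick back-of-envelope script; note the 2025 per-car average is derived from the quoted totals, since the talk doesn't state the exact level mix:

```python
# Back-of-envelope check of the shipped-compute figures in the talk.
TFLOPS_PER_CAR_2016 = 0.5        # "half a teraflop per car"
CARS_SHIPPED_2016 = 80_000_000   # estimated 2016 unit volume

shipped_tflops_2016 = TFLOPS_PER_CAR_2016 * CARS_SHIPPED_2016
print(shipped_tflops_2016)       # 40 million teraflops, matching the slide

# 2025: the talk quotes ~300M TFLOPS across ~110M cars; the implied
# average compute per car follows (the L2-L5 mix itself isn't given).
CARS_SHIPPED_2025 = 110_000_000
avg_tflops_per_car_2025 = 300_000_000 / CARS_SHIPPED_2025
print(round(avg_tflops_per_car_2025, 2))  # ~2.7 teraflops per car on average
```

So the projection works out to roughly a five-fold increase in average per-car compute, on top of a modest increase in unit volume.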
And of course, I pulled out my favorite one. Every self driving car will generate about 4,000 gigabytes per day. And of course, not all that data is going back to the data center cloud, but all of it will be assembled and assessed in order to determine how those cars are going to behave on the road at every second that they're driving down the road. So the real question becomes, what are you going to do with all that data? And of course, I think you're very familiar.
We've been describing what we call the virtuous cycle that defines how you can create systems to extract the value from all of the data being generated by these smart and connected things. Of course, in the cloud or data center, we're going to be feeding in that data from the edge and we're going to be using critical technologies like memory and FPGAs to create these systems. And of course, data analytics will be done not only in the cloud and the data center, but within the network, referred to as the fog, and at the edge in some of these things. But all of it is bound together by connectivity and all of it's enabled by Moore's Law, obviously delivered by Intel. Each of these elements is key for autonomous driving success, and no one can deliver that kind of end-to-end architecture like Intel can.
So let's translate that virtuous cycle. I'm going to flip it over on its side and describe it in the context of an autonomous vehicle. So the car obviously is the thing at the edge. We need to do real time environmental modeling from all those sensors. We need to fuse that data and then determine the proper vehicle trajectory.
Some of that data will be scored and moved back to the data center, and the data will be used to train the model, and that model can be updated and sent back to the cars continually to improve the capability of the vehicles on the road. And of course, high definition maps need to be delivered from the data center to the car to help continue to improve its capability to determine its trajectory. So we clearly show where Intel's assets fit in this kind of end-to-end architecture, right? Starting on the left, we have tailored solutions for in-vehicle computing applications: CPUs, FPGAs, the memory and storage that we need, deep learning scoring models from a software perspective. And then as mobile data traffic surges, these connected vehicles will be amongst billions of things that are looking for network bandwidth to fulfill their requirements.
So at the network level, Intel's leadership in accelerating 5G is critical. And this is really important because 5G is the only network technology capable of delivering a latency of a millisecond or less with speeds that can peak at 10 gigabits per second. And you'll hear a bit more from Asha in a few minutes. But none of this is possible if data can't be sufficiently stored, shared, and protected. And so in the data center, we've tailored solutions for automotive kinds of applications.
Of course, we have Xeon, Xeon Phi, Xeon plus FPGA, machine and deep learning algorithms, and of course you'll hear more from Diane as well. We have a broad and deep set of core competencies in our software portfolio, again optimized for this Intel architecture implementation. And our Intel security assets span this end-to-end architecture. But I thought maybe I'd give you a little different perspective this morning and let you hear from an automotive OEM who can share with you how they're thinking about autonomous vehicles and these autonomous applications. Yesterday, you heard Baidu's Senior VP, Mr.
Wang, talk with Diane about Baidu's vision on artificial intelligence and enabling the workload needs on Xeon and Xeon Phi processors. Well, today, I'd like to invite Mr. Wang to come out and share his views on autonomous driving and how Baidu and Intel are working together. So join me in welcoming Mr. Wang.
Good morning. Hi. Thanks for being here. Good to see you again. So tell us a little bit about what's going on.
Yes. I'm very glad to share with you Baidu's vision for autonomous driving and why Intel and Baidu are working together. We have a long-term partnership, and we each have unique strengths in this area: Intel has a good end-to-end onboard computing system that we can build together, and Baidu has strengths in artificial intelligence and autonomous driving technologies. By working together, we can develop the onboard computing platform needed to deliver autonomous driving vehicles. Baidu's vision is that within three years we can start commercial operation of autonomous driving vehicles in small volume, and within five years we can start mass production of autonomous driving vehicles.
And we also believe that within 10 to 15 years most newly produced cars will be autonomous, and of course they can run on new energy and be used for ride sharing. These autonomous driving vehicles will make a key contribution to realizing this vision.
So Mr. Wang, maybe talk a little bit about the decision to choose Intel to work with the technologies that you need in the car.
Yes. As mentioned, Intel and Baidu have been working together for a long time, more specifically on cloud computing technologies in the data center. And Xeon Phi may be good for the upcoming data center technology. For onboard computing, the challenge is different and the requirements can be very different.
So Intel and Baidu need to work together to build this new technology suitable for autonomous driving. Intel understands the computing hardware, and Baidu is good at the software for autonomous driving and artificial intelligence. That's why, when we work together, we can deliver a system suitable for the future.
Great. So I really appreciate you coming out and sharing a little bit about what we're doing together. So thank you very much.
Thank you.
Now that you have that environmental background, if you will, the foundation for these autonomous vehicles, we're really happy to be able to give you much deeper insight. And I have a set of experts, if you will, from Intel to help us understand a little bit more. Diane Bryant is going to talk about artificial intelligence and machine learning in the data center or cloud to support these kinds of autonomous vehicles. Asha Keddy will then explain how 5G is a different set of technologies necessary to support these functions and how Intel is leading the efforts to accelerate 5G implementation. Then Doug Fisher is going to get up and talk about the depth and breadth of Intel's software portfolio.
And then I'll get back up here at the end and pull it all back together. But I want to emphasize a quote from somebody else in the industry, Ricky Hudi, who is the Senior Vice President of Electrical and Electronic Systems at Audi. He recently said, it is now the time to make the step from traditional microprocessor and microcontroller architectures towards a more centralized computing architecture with strong ties to the cloud. So I think that's another good perspective on what's needed to deliver autonomous vehicles. So with that, I'm very excited to welcome the Executive Vice President of the Data Center Group on stage, Diane Bryant.
Thank you. Okay.
So, thanks a bunch for coming. We're so happy you're all here. And I'm going to do a couple of things. I'm going to level set on some terminology. Those of you that are quite savvy in the area of artificial intelligence might find this a little rudimentary, but we'll go through some terms because there are a lot of terms that get used a lot of different ways. And then I'll talk to you about why Intel leads. Okay. So to start, artificial intelligence. We recognize this as a very hypercharged term; it's defined as computing systems that are capable of human-like intelligent behavior. And AI is truly all around us even now, with commonplace usages like talk-to-text or fraud detection or photo tagging.
And then artificial intelligence is also creating big cutting-edge opportunities like autonomous vehicles. So artificial intelligence, we believe, is truly going to transform the way all businesses operate as well as the way people simply engage with the world. And I really do appreciate Andrew Ng's quote, Andrew Ng being the Chief Scientist at Baidu: his observation comparing artificial intelligence to the Industrial Revolution, saying that just as the Industrial Revolution relieved humanity of much of the physical drudgery, artificial intelligence will relieve us of much of the mental drudgery. So big, big aspirational goals. But given that there is a lot of hype and a lot of buzz around artificial intelligence, there's a clear question: so why now?
We've been talking about artificial intelligence since the '80s, so why now? There are three real reasons why it is now going to be real, or is real. The first is obviously hardware. And we'll take some credit for Moore's Law: if you didn't have Moore's Law and this ever-increasing capacity and performance at ever lower cost, it would be simply prohibitive to store the massive amounts of data needed and compute on that data. The second is the data itself.
Massive, massive data sets have now been amassed, thanks to the fact that there is pervasive connectivity and billions of connected things today, with more coming online, streaming data to the cloud for analysis. So you now have the data to deliver against artificial intelligence. And then the third is that once you've cleared the hurdles of hardware and data, there is this rapid level of innovation and investment in the algorithms around artificial intelligence. And you can see that: lots of increased investment in academia and on the research side, and lots and lots of startup companies, and obviously lots and lots of algorithms coming from all that.
So artificial intelligence, we believe, is going to be the next wave of growth. It is going to drive growth in this industry. So then you say, what is machine learning? This is part of the tutorial. Machine learning is a class of algorithms that improve over time as more and more data is processed.
And that learning part is key, right? You have to keep applying more data so that the machine continues to learn and becomes more and more accurate. Machine learning is a tool to deliver artificial intelligence. Underneath machine learning is deep learning; deep learning is a subset of machine learning.
And it's a class of modeling inspired by the way the human brain's neural network actually works. And we call it deep when there is more than one level of computation between the input and the output. Today, with deep learning, there are typically two, four, or eight levels of computation between the many inputs and the actual output. The math behind deep learning and machine learning is not new. It's not complex.
It is simple linear algebra. We all took it back in high school. Most of us passed. Did you pass, Mark? Okay. What makes this different, though, is just the scale: massive, massive, massive scale.
And the more data you compute, the more accurate the model. And the more compute capacity you apply to the problem, the faster that model trains, providing you results. So you'll hear all the time: time to train, time to train. The key is to reduce the time to train. And you do that by throwing lots and lots of compute capacity at the problem.
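That "simple linear algebra at scale" point is easy to see in code. Below is a minimal sketch of a forward pass through a small "deep" network in NumPy; the layer sizes and random weights are purely illustrative, not anything from the talk:

```python
import numpy as np

# A "deep" network is just stacked matrix multiplies with a nonlinearity
# between them. Sizes here are illustrative toys, not from the talk.
rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Input -> three hidden layers -> output: several "levels of computation"
# between the many inputs and the output, in the talk's terms.
layer_sizes = [64, 128, 128, 64, 10]
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    for w in weights[:-1]:
        x = relu(x @ w)        # one level: a matrix multiply plus nonlinearity
    return x @ weights[-1]     # final linear layer

batch = rng.standard_normal((32, 64))  # 32 inputs processed at once
out = forward(batch)
```

At production scale the matrices and batches are orders of magnitude larger, which is exactly why more compute capacity translates directly into shorter time to train.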
So today, what you see here, excuse me, is that the machine learning market is still rather nascent and the deep learning market is even more emergent. Only 7% of all servers shipped last year were running machine learning or deep learning workloads. That's the total. Of that 7%, 0.1% were deep learning workloads. Okay.
So as I said, there's lots of research, lots of excitement in the deep learning space, but it is still very nascent. However, analytics is the fastest growing workload in the data center. By 2020, there'll be more servers running data analytics than any other workload. That is exciting; it is rapidly growing.
And hence all the attention. And we certainly are investing in this area because the growth opportunity is obvious. It's huge. The good news is that machine learning and deep learning are distributed, highly parallel, high-performance workloads by definition, which makes them ideal for Xeon and Xeon Phi. So of the 7% of the servers that were running analytics from last year's total server base, 95% of them were good old-fashioned two-socket Xeon servers.
Just Xeon; the compute engine is two sockets of Intel Xeon processors. About 2.5% were Xeon two-socket servers with an add-in card, a GPGPU add-in card. And about 2.5% were not Intel architecture, either SPARC or Power. So Xeon and Xeon Phi: it's popular to think otherwise, but we are clearly leading in this space. Xeon and Xeon Phi are inherently scalable architectures.
And so, as you can see, that scalability, that pervasiveness, applies very well to these workloads. Okay. The next term that is frequently used and can be confusing, because it is a very general term and does get broadly applied, is the word accelerator. The objective in the world of the data center is to run an application as fast as possible. So, accelerate the code.
That's the fundamental objective of any operation inside a data center. So the fastest acceleration of an application is going to be through an instruction set, okay? And we do this all the time. We expand the Intel architecture instruction set. We do this when we identify code that is frequently used or a workload that is emerging.
We identify that workload and we integrate it into the actual instruction set. So we expand the Intel architecture instruction set. An easy example of that, at the bottom, is AES-NI, our encryption instruction. So when you have almost 80% of the world's Internet traffic being encrypted, it becomes a pervasive operation. And therefore, we want to make that operation as fast as possible.
And so we turn that into an actual instruction. AVX is another good example. Floating point: those of us that are old enough remember when the floating point unit was actually a second piece of silicon, and then we integrated the floating point unit in the '80s. And then we said, hey, floating point operations are so pervasive they should be accelerated through instructions. So we have the AVX instruction set.
So those are two examples. The second fastest acceleration is building purpose-built logic and integrating that IP, as we call it, into the CPU. So when the targeted code is encountered, the core CPU passes control to that IP block, accelerates that code, whatever code is being targeted, and then returns control back to the general processor. And having that IP block actually on the CPU makes that execution of the control flow very, very fast.
An example of that is HEVC. HEVC is an IP block, an accelerator, that we added to the new Broadwell processor, so it's a recent example. HEVC is the industry standard for video transcoding. And with more and more media servers being deployed, the ability to accelerate transcode becomes a big value proposition.
So we created an IP block and integrated it into Broadwell. Third is moving that IP block into a multi-chip package with the CPU. And at this point, you would naturally call that an ASIC. It's an independent die, created to run a very specific function.
You would do this if the applicability to the broad workload base is low and the product cost of actually integrating it is meaningful. You wouldn't want to burden the CPU with that incremental product cost. Another application of this would be if you already have an existing product, like the FPGA. So you see us, with the Altera FPGA acquisition: our first approach to the market is to take the existing FPGA die and put it into a multi-chip package with the CPU, okay?
The fourth is to have the ASIC outside of the MCP. It just sits on the coherent bus, so it does get the performance benefit of direct access to the system memory. You would do this if the ASIC were perhaps a third-party component, where integration didn't make financial or good business sense. We have instances where our customers are developing their own unique silicon with our proprietary coherent interface.
So we license them that interface, we support their development, we enable them on it, and we do that at no charge. And then the fifth option is the slowest option.
So this is making the accelerator a PCI Express add-in card. And this is the approach of a GPGPU. This solution is the slowest because it doesn't have the benefit of direct access to the system memory the application is actually running from. And it has the added latency of the hops, as we call them, between the CPU out to the PCI bus and back. And it's also bandwidth, or I/O, constrained by the limits of PCI Express.
So as to which of these you call onload or offload, again, different people will say different things. Onload can be defined in two ways. You can either define it as physical integration, in which case you would say the first two are onload, or you can define it as the control of the programming model itself. If the control of the program stays on the host CPU, then it's onload, in which case you would, in a pure sense, only call the first one a true onload system.
But the key here is that as you move to the right, the benefit and the performance impact of that accelerator diminish, okay? Okay, so training. In training, you have very large amounts of data that are used to create a model by applying that data to a selected algorithm. And it can take several attempts with several different algorithms before the model actually converges and identifies the desired correlation.
And this is why, when we talk about whether you should embed algorithms into the actual silicon, we say it's still early days. This trying of many different algorithms before you actually find one that converges means you really want to keep it on a general-purpose CPU. If there are anomalous events missing from the original training data sets, then those anomalies can be accounted for over time by applying more data, and that model then improves with time. So with an autonomous vehicle, we will be able to retrain the model.
As network latencies improve, we'll be able to retrain the model in real time, based on the real-time data acquisition that is coming from the car, and so you're continuously improving the accuracy of that model. In scoring, or inference as some folks call it, the trained models are applied to a new data set to generate predictions. So for example, the output of scoring could be predicting an outcome. It could be estimating a projected demand.
It could be the car accurately identifying: is that a pedestrian? Is it a police car? Et cetera. Okay, so that's training and scoring as applied to autonomous driving. With scoring, you're obviously testing one new input against an existing model.
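The split described here — train once on the full labelled data set, then score individual new inputs against the resulting model — can be sketched in a few lines. The toy data and the nearest-centroid "model" are purely illustrative, not what a real perception stack uses:

```python
# Minimal sketch of the train-once / score-many split.
# Toy nearest-centroid classifier on invented 2-D "sensor" points.

def train(samples):
    """Training: crunch the whole labelled data set into a model (centroids)."""
    sums = {}
    for label, (x, y) in samples:
        sx, sy, n = sums.get(label, (0.0, 0.0, 0))
        sums[label] = (sx + x, sy + y, n + 1)
    return {lbl: (sx / n, sy / n) for lbl, (sx, sy, n) in sums.items()}

def score(model, point):
    """Scoring/inference: test one new input against the trained model."""
    x, y = point
    return min(model, key=lambda l: (model[l][0] - x) ** 2 + (model[l][1] - y) ** 2)

data = [("pedestrian", (0.9, 1.1)), ("pedestrian", (1.1, 0.9)),
        ("vehicle", (5.0, 5.2)), ("vehicle", (4.8, 5.1))]
model = train(data)              # heavy step, done in the data center
print(score(model, (1.0, 1.0)))  # light step, done in the car -> "pedestrian"
```

Training touches every sample; scoring touches only the compact model — which is why the compute profiles of the two phases differ so much.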
The training process is far, far more compute intensive. With training, that full data set is getting analyzed through multiple iterations. The Xeon E5 processor family is the most widely deployed processor for these operations today. And then there's our second generation of Xeon Phi, which was code-named Knights Landing and which we just launched at the International Supercomputing Conference back in June. It delivers the scalable and highly parallel performance that's optimal for machine learning, and certainly deep learning, in the training area. And there's tremendous benefit as well in running both the training and the scoring on one architecture.
There's a lot of value in having a common and consistent programming model. There are reasons why you wouldn't want to train on a GPU and then score on a CPU: you want your development environment to be consistent for training and scoring if you're going to get an optimal result. And once you've trained an algorithm, you really don't want to go back and have to recode and optimize it. The second value is onload versus offload, as we discussed on the accelerator slide, and there's clear value in onload.
And the new Xeon Phi processor is both a bootable host CPU and a coprocessor. So the algorithms, the machine learning solution, run on the host CPU; it is onloaded. This means you don't have those constraints I talked about and the added complexity that comes with an offload accelerator solution. As I showed in the accelerator definition, offload results in these additional delays, these additional hops, as you transfer control from the CPU to the PCI Express add-in card.
An add in card model or an offload model also limits the size of the training, the size of the data set that you can train on. So it's a real constraint. And when you talk about learning, in either machine learning or deep learning, it fundamentally means that the data set is going to continue to grow in size. The larger the data set, the better. And so if you have an offload solution, you're fundamentally limited.
So with an onload solution like Xeon Phi that scales, a 128-node Xeon Phi cluster can reduce that time to train by over 50x. So Xeon and Xeon Phi are inherently scalable architectures, well suited for artificial intelligence, machine learning and deep learning. At my keynote yesterday, if you were there, you might have seen the CEO and founder of Indico. He talked about the use of Intel processors. I also talked about NERSC, the U.S. Department of Energy research center, and then Mr. Wang from Baidu as well, all of them expressing support for Xeon Phi for machine learning. And that machine learning umbrella includes deep learning as a subset. I also disclosed that we have a next-generation Xeon Phi, Knights Mill.
It will be available next year. It is also a bootable host CPU, so it is also an onload solution. And what we've done, back to the accelerator slide, is add instructions into the Intel instruction set targeted at deep learning. So we've added variable-precision floating point, and this is targeted at neural networks.
So it will improve the performance of training those neural network models, and it will do it with higher efficiency. So the time to train will improve with the Knights Mill product line. And we have a very long roadmap of Xeon Phi products, a full family of Knights solutions. We will, and are, continuing to invest in the leading-edge technology around artificial intelligence. As I said, it is a rapidly evolving space, with lots and lots of innovation and exploration going on.
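To get a feel for why reduced precision helps — half the bytes per value, at some cost in accuracy — here is a plain NumPy sketch. This uses NumPy's float16, not the new instructions themselves, purely to illustrate the trade:

```python
import numpy as np

# Illustrative only: lower-precision arithmetic trades accuracy for
# memory traffic and throughput, which is the general idea behind
# variable-precision support for neural networks.

rng = np.random.default_rng(0)
a = rng.standard_normal((256, 256)).astype(np.float32)
b = rng.standard_normal((256, 256)).astype(np.float32)

exact = a @ b
half = (a.astype(np.float16) @ b.astype(np.float16)).astype(np.float32)

# Half precision halves the bytes moved per value...
print(np.float16(0).nbytes, np.float32(0).nbytes)  # 2 4
# ...at the cost of some rounding error, often tolerable for neural nets:
print(float(np.abs(exact - half).max()))
```

Training frameworks exploit exactly this: where the model tolerates the rounding error, lower precision moves less data and trains faster.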
We demonstrated that last week with the announcement of our definitive agreement to acquire Nervana. Nervana is recognized as a leader in the deep learning space. They have wonderful IP — silicon-level IP, software IP — and they even have a cloud service that they're running. And there's also just the talent, pulling that deep learning talent into the organization. So it's both IP and expertise that we are excited about: bringing the Intel engineers that have been developing Xeon and Xeon Phi together with the Nervana engineers, who have their own deep learning engine and obviously lots of insight into the market, to help advance our own roadmap.
So autonomous cars, as you all know — you're here, and Doug gave a nice introduction on this — are the future, and it will take a phenomenal amount of compute to support them. With a self-driving car, the model is obviously created and trained in the data center on a high-performance computing cluster on Xeon Phi. And then the model is pushed out to the car, where the scoring occurs. We've estimated that to accurately support 20,000 autonomous vehicles requires an exaflop of sustained compute in the data center — exaflop supercomputing capacity. That's for 20,000 cars.
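The arithmetic behind that estimate is easy to check:

```python
# Back-of-the-envelope version of the estimate above:
# an exaflop of sustained data-center compute for 20,000 vehicles.

sustained_flops = 1e18     # 1 exaflop, as stated in the talk
fleet_size = 20_000
per_car = sustained_flops / fleet_size
print(f"{per_car:.0e} FLOPS of data-center compute per car")  # 5e+13, i.e. 50 teraflops
```

So the claim amounts to roughly 50 teraflops of sustained back-end compute per vehicle.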
So this level of supercomputing is needed when you think about the millions of sensors that Doug was talking about inside and outside the car, collecting new data all the time, constantly staying aware of the world around the car and the drivers inside the car, coupled with this process of continuously updating the model. It's a massive, massive compute situation. And autonomous cars are coming in the future: BMW — two days ago, with BK — said 2021 for autonomous vehicles. I think this is probably the last generation of kids that are actually going to even learn how to drive.
It's crazy when you think about it. So there are clearly complexities, though, in developing a solution for autonomous vehicles. First, the size of the data sets themselves: a single fleet car can generate over 40 terabytes of data in just 8 hours of driving. The second is the time to train, as I talked about.
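As a quick sanity check of that fleet-car figure, expressed as a sustained data rate (assuming decimal terabytes):

```python
# The 40 TB in 8 hours of driving figure, as a sustained rate.
total_bytes = 40e12        # 40 terabytes (decimal units assumed)
seconds = 8 * 3600         # 8 hours of driving
rate = total_bytes / seconds
print(f"{rate / 1e9:.2f} GB/s sustained")  # ~1.39 GB/s
```

That is on the order of 1.4 gigabytes every second, which is why data set size dominates the engineering conversation here.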
So simulation and validation of new code has to happen overnight; it can't take weeks for the models to converge and train. The third is secure sharing of the data. Some automakers have been building data sets for many years.
However, many automakers and suppliers know that they alone don't have sufficient data, and a means to collect and share that data is critical. And then scalability: as I said, learning comes through this process of bigger and bigger data sets, so the solution needs to scale out. Coincidentally, perhaps, these are the same challenges that the healthcare industry is faced with in its mission of precision medicine.
The genomic data sets for DNA-rooted diseases are very large. The speed to create a valid treatment plan is critical. And these institutions and agencies view their genomic data as their IP, so the sharing of genome sequences simply doesn't exist today due to security and IP concerns. The largest cancer institute holds less than 1% of the world's genome sequences.
So the need for massive analysis to reach a valid conclusion in the analysis of DNA-rooted disease is critical, just as the massive analysis to reach a valid conclusion for an autonomous vehicle is critical. And in our engagement with the auto industry, we actually believe that the work we've been doing for the past couple of years with the healthcare industry has some pretty good applicability. You may have heard us talk about it before; we call it the Collaborative Cancer Cloud. So we partnered with OHSU, Oregon Health & Science University, and with their research center.
And we built a solution — and this is a cartoon version, so bear with me here. What we have found is that we can apply this directly to creating a secure and shared data environment for the carmakers, just as we've done for the health industry: creating that data exchange. And it's unique to Intel technology, unique to some of the security features we built into the processors. So here's what happens.
So assume you have three cancer research institutions, and they want to share and contribute their genome sequences to a data exchange. And you have a researcher who's subscribed to the genomics data exchange service. They want to look across all the sequences for an image that matches the image of the patient's tumor. So the researcher builds a query and then sends that query out to do the compare: are there any other tumors like this amongst all the genome sequences?
And if so, what was the treatment and what was the outcome? The result then is sent back to the central function. The central function, upon receipt of the result, encrypts the image, using hardware security features that we've incorporated to create a secure virtual machine. That secure virtual machine holds the results of the query. When the query across all sites and the aggregation is complete, the virtual machine is removed, eliminating all traces of the data.
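A very simplified sketch of that flow — raw data stays local, the query ships out, and only per-site results come back — might look like the following. The site names, records and matching logic are invented for illustration; the real system's hardware-backed enclaves and encryption are not modeled here:

```python
# Toy federated query: each site answers against its own local data;
# only query results, never raw records, leave the site.
# All names and records below are hypothetical illustrations.

SITE_A = [{"tumor": "sigA", "treatment": "T1", "outcome": "remission"}]
SITE_B = [{"tumor": "sigB", "treatment": "T2", "outcome": "stable"}]
SITE_C = [{"tumor": "sigA", "treatment": "T3", "outcome": "remission"}]

def run_query_locally(records, tumor_signature):
    """Executed at the site: the raw records never leave it."""
    return [(r["treatment"], r["outcome"])
            for r in records if r["tumor"] == tumor_signature]

def federated_query(sites, tumor_signature):
    """The central function only ever sees per-site results."""
    results = []
    for site in sites:
        results.extend(run_query_locally(site, tumor_signature))
    return results

print(federated_query([SITE_A, SITE_B, SITE_C], "sigA"))
# [('T1', 'remission'), ('T3', 'remission')]
```

The design choice is the one the talk describes: each institution keeps its data local and controlled, yet the researcher still gets an answer computed across all of it.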
So we've enabled all three of these institutes to securely share massive amounts of real-world patient data, keeping that data local, secure and controlled. The solution is up and running at Oregon Health & Science University, Dana-Farber and the Ontario Institute for Cancer Research. We believe the same type of data exchange will be key to achieving autonomous vehicles, and we're looking forward to applying that technology and having a big impact on the aggregation and sharing of massive amounts of data for autonomous driving. With that, I would like to hand off to Asha Keddy, Vice President of Next Generation and Standards.
Asha?
Thank you, Diane. Hello, everyone, and welcome. Here, I'd like to talk about two things: 5G, and how it applies to cars in particular. So if I look at the definition of 5G, it is what we have to do for a fully connected, mobile, intelligent society. And this is the first time, if you look at how we define wireless technologies, that we're not talking about just an air interface anymore.
We're talking about the connection of connectivity and the air interface to a society. So it impacts everybody: my sister, who's a housewife with kids; the microbreweries in Portland; John Doe, the farmer; and of course, cars. To simplify this, we look at it through three cornerstones. So if you look at the three cornerstones we have in 5G, the first one is massive machine-type communications.
This is revolutionary because what happens here is you have a massive number of devices that are starting to transmit. This is the foundation. We started trying to do this with LTE, with things like narrowband IoT. But as we go beyond that, we need things like sensors that help with agriculture and drones delivering packages. All of these things help us form an intelligent society: smart cities, smart environments and so on.
So the key cornerstone here is one of scale: billions and billions of sensors, billions and billions of devices. Another cornerstone is the continuation of what we call enhanced mobile broadband. Brian talked a lot two days ago about merged reality. All of this notion of 4K video, 8K video, merged reality is more of an evolution of the things we understand very well — LTE, Wi-Fi — and what goes forward from there. And then the third cornerstone that we use to simplify the way we look at things is ultra-low latency.
This is something that is very unique to 5G, where we're looking at how we can have extremely high bandwidth, extremely high reliability and low latency, which becomes critical in things like avoiding car crashes, disaster relief, remote surgeries and aspects like that. If I take a step back, 2G was really about voice. With 3G, we started doing some things around data. But all of the interactions were us with the phone, us with the PCs, us with the device. It was very interpersonal.
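Coming back to the ultra-low-latency cornerstone for a moment, a little arithmetic shows why it matters for cars. The speed and the latency figures here are illustrative assumptions, not requirements from any standard:

```python
# How far does a car travel while a message is in flight?
# Speed and latencies below are illustrative assumptions.

def metres_travelled(speed_kmh, latency_ms):
    return speed_kmh / 3.6 * latency_ms / 1e3

# At highway speed, an assumed 100 ms round trip vs an assumed 1 ms one:
print(round(metres_travelled(120, 100), 2))  # ~3.33 m
print(round(metres_travelled(120, 1), 3))    # ~0.033 m
```

At 120 km/h a car covers more than three metres in 100 milliseconds — close to a car length — which is why collision-avoidance use cases push latency down toward the millisecond range.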
Now with 4G — when we designed 4G in 2008, the original version, it was an IP replacement system. But then the human needs became more, the computational needs became more, and the need for information flow became more. So we started evolving LTE to do things like narrowband sensors, all the way to more complicated things. But with 5G, we're trying to take a step back and define it from the get-go so that the air interface can focus on these cornerstones.
And we enable devices that are reactive and immersive, and the compute boundaries no longer exist, so I can get the information where I need it. My watch can give me the entire picture of how or what a device is changing. As we get on with that, if I look at devices, the way we are trying to standardize this is along three tracks. And Intel is investing along each of these three tracks, and we have a lot of businesses that focus on this.
The first track is around IoT, or the Internet of Things. Loosely defined, here you need things that have long range, so you don't have to worry about how far things go. We have announced a lot of products on this, including Category 1 and narrowband IoT. And we are laying the foundations in the standards and the prototypes so that we can get into massive machine-type communications. The second track is LTE and Wi-Fi, which we have put a lot of effort into, and they will continue to evolve.
Wi-Fi will evolve into densification through the IEEE 802.11ax standard. And with LTE, we're moving on from LTE Advanced to LTE Advanced Pro, because when you have engineers make up the names, they don't get very imaginative. So now you have Pro, because we couldn't come up with anything past Advanced. And now, just like we run out of land, we run out of spectrum.
So we have millimeter wave. It used to be considered junk spectrum, but that's the only place we can get large chunks of spectrum now. Fortunately for us at Intel, we have been investing in this since 2006 through technologies such as WiGig. On July 14, the FCC commissioner did something foundational for the U.S. and the world. He released spectrum at 28 gigahertz and 37 to 40 gigahertz, which allows us to start having the innovation. And now China and Japan will all do it, and it will set the stage for an international effort on these standards. Now, I talked about devices and how, if you watch Iron Man, our PCs can become the Jarvises of tomorrow, right? But this revolution cannot happen without the network.
The network has to fundamentally change. It gets flattened; information flows between devices, or close by at the edge. And because at Intel we have DCG and the network processing group, and we have the client teams and the business units, we are uniquely able to take an end-to-end position here. If you look at some other companies, the famous ones, they do not have that advantage.
And what we do at Intel is take things from an end-to-end point of view. By that I mean: I talked about the devices, and once you have the devices, we standardize them using radio access technologies that we define in 3GPP and IEEE. We have a lot of effort there and we're working on it. But we also work on both ends of the air interface.
So on the access network side, there's a lot of effort on mobile edge computing, where the information goes to the edge so you don't have to go all the way back, and so you can do a lot of the use cases. We'll, for example, go from the car to the lamppost, which will have a small cell, back to another car or something like that. So there's a lot there. And when we design it, we can design the air interface with both ends in mind, and that's what we bring when we meet customers.
We continue this throughout the core network and the cloud, through the servers, and also with concepts such as network slicing, Cloud RAN, software-defined radios and network function virtualization. And this gets us a lot of credibility, because when we go into a room, we have the people who represent all of these areas in there with us. If I look at the journey ahead, no matter how we look at it, it's still very complex. One of the big differences in 5G, if you notice, from a cellular industry point of view, is that we don't have the concept of a Chinese standard and an American standard — CDMA, LTE, WiMAX and all of it. Today the cellular industry is more than a $3 trillion industry.
And so the differences and the requirements all come back into 3GPP. That means the intensity with which we have to do things there is very different. And then you have the Chinese government, the Japanese, and the free-market notions of the U.S. all coming into play in standards.
And standards are the foundation by which we can scale. And we're not just scaling across operators; we're scaling across categories too, right, like cars, like drones. So to do this well, we have a lot of R&D that we are putting into prototypes. These prototypes are on both the client and the network side, and we evolve these prototypes, learn from them, and then focus them on the use cases around IoT or millimeter wave, feeding the evolution back into standards. Now, we can't do this alone, right?
So we are working with operators and global partnerships in every geo, and we have publicly announced all of these. We have China Mobile in China; DOCOMO, for example, in Japan; SKT and KT in Korea; Verizon and AT&T in the U.S.; Vodafone, DT and others in Europe. We are also working closely with the TEM infrastructure partners.
And then we are expanding what used to be a triangle of, say, a silicon supplier, a telco equipment provider and an operator. Now we start expanding it to a CPE provider, a drone provider, automobiles — and so we're expanding into verticals as we go beyond smartphones. We take these global partnerships and then figure out what products we need to build as we evolve and scale them. And to emphasize: Intel is unique in doing this from an end-to-end point of view. Looking at automobiles, Doug talked a lot about the future of cars and autonomous driving, and how, when we have that, our behaviors start to change.
We can't even predict some of the behaviors that will start to change once we get that time back. And this has consequences in terms of the amount of data that leaves the car and what we do in the car, and it puts a lot of load on the network. In addition to what we do in the network, there are a lot of requirements with autonomous driving around low latency, ultra-high reliability and so on. And that's where a lot of the technology and the communication comes in. Even when you dial up an Uber today, you still need the cell phone to go and handle all of the transactions, right?
So the connectivity is very pervasive, and so are the adjacencies. As we evolve into these adjacencies, they all start looking at how we interact with the car too, and so it becomes a very different society. And there are three main vectors — I'm doing a lot of things in threes today, or multiples of three. So the first of the three main vectors here is vehicle-to-vehicle.
We can start getting information, looking ahead, looking at what happens before us — important even for things like platooning, with ultra-high reliability and high bandwidth, right? The high bandwidth also becomes important when I want the entertainment or immersive experiences that I have at home or work in the car as well. If I look at our 5G end-to-end prototype roadmaps, as I mentioned before, we're doing everything in the same standards, so it brings its own dynamics. For 2020, Japan announced that they're going to have the first 5G-like products in their trials at the Olympics.
Then Korea announced that they're going to try to do a lot of it by 2018. Europe has invested a lot in its research programs around Horizon 2020, and China has its own trials. And Verizon and AT&T got the U.S. squarely into the market with a lot of energy and trials there.
But the pulls and the requirements in all of these areas are different. For example, what you require in Gangnam — where you can park 13 floors below ground and still expect connectivity — is different from what you require in Kansas. And so that means we have to start looking at how we actually build the building blocks across these requirements, the field testing and all of it. I mentioned that we have spectrum. Spectrum is like the lifeblood here.
And when you talk about low frequencies and high frequencies, the work we have to do is quite different at the high end and the low end. Just like Usain Bolt: the training he has to do is very different from Eliud Kipchoge from Kenya, right? And these properties mean that we can do different things. We have an entire RFIC roadmap on both sub-6 gigahertz, for IoT and broadband evolution, and for millimeter wave. We then have FPGA-based — Altera-based — mobile trial platforms.
And because these are FPGA-based, what we can do is rapidly prototype the different requirements, different SKUs and so on. We evolve these platforms so that they get smaller over time but stay very, very flexible. And last but not least, at different points in time we will take these things — whether they're standards-based or customized for cars — and harden them into baseband ASICs. There's a live demo downstairs and a demo over here that you can catch later that covers all of these concepts. Now, once we have the baseband ASICs, with these three things we can start creating things that can then go into cars.
For example, you can put it in the back of the car with the antennas, or you can put it into a CPE. We basically have mix-and-match building blocks for trials. We combine these with fallbacks to existing products, including LTE and Wi-Fi products. And most importantly, we combine these with the innovation we have on the network infrastructure side so that we can bring new use cases to life for the end consumer. Some of the things that are uniquely different in automotive as we work with vendors — I'll start with the most basic.
When you have a phone or a PC, the RF is attached to the baseband. In a car, that is fundamentally different. So we have to start looking at where to put the RF to get 360-degree coverage — the side-view mirrors, the top of the car, the back of the car — and how we get the antennas to talk back to the baseband ASIC in a quick manner. Things like that start changing. When you're doing trials, what does shock absorption do?
How do you do high-speed millimeter wave in these areas? We have also started to take an outside-in approach to 5G with the use cases. And so now we have launched what I'll call our automotive trial platform for 5G. This is our second generation of platforms, and we are working very diligently to look at the different use cases and what requirements they put on our automotive trial platform. These start with the FPGA-based platforms and RFICs that I showed, and then evolve into more hardened ASICs that we will do over time.
We have joint test beds on these with key OEMs, infrastructure partners and operators. And we're doing these in all the available spectrum: sub-6, 28, 39 — which was released a month ago — and then the higher frequencies. These vehicular trial concepts then allow us to start applying the pressure to understand what the loads look like: how much of the information has to remain in the car, how much goes back to other cars, the edge and the cloud. Real-time map changes, V2X cooperative driving, traffic flows, platooning, entertainment — all of these today start putting a load on what LTE Advanced Pro can achieve. Even simple games like Pokemon GO have suddenly put a strain on the network, and that's something we can't even predict.
Virginia Woolf famously said that on or about 1910, human character changed forever. To paraphrase an FCC commissioner, we can say that on or about 2020 — maybe she was 100 years too early — the way we work with machines changes forever. And so to wrap it up: we have defined 5G in a different way, in terms of enabling our fully connected, mobile, intelligent society. We will take an end-to-end approach, which is very different from others, and we will bring our compute and connected-car vision to life with it. We are taking an outside-in approach, working with car manufacturers, operators and telcos to adapt and evolve the use cases, so that we learn with them and get their buy-in as we evolve the platforms.
And we will achieve the economies of scale through standards leadership. Thank you. With that, I would like to invite Doug Fisher, Senior Vice President and General Manager of the Software and Services Group.
All right. Good morning. I am super happy to be here because I get to talk about a topic I love, and also because I almost ended up in the hospital about 45 minutes ago. I was reading my e-mail on my cell phone when I walked backstage, and I thought I should look and see where I'm going, so I pulled the phone away. And what do you see? Nothing but a bright white light.
Just as I stepped on one of those little roller carts for crates, and I went sliding backstage. So I think I found a new application for collision avoidance: we can add it to the cell phone. And that will explain the scream you may have heard earlier. All right. So let's get going. So I'm going to talk to you about software.
That's something near and dear to my heart that I love. And I think it is the best hidden asset we have at Intel, an asset that we can bring to bear that nobody else in the industry can. We bring this end to end. We have a broad, broad set of capabilities in software at Intel that go from the things — all the devices, the wearables, the drones, the robots, the PCs — all the way through the network to the data center. The amount of software, the breadth of software we have at Intel and the capabilities we have are unmatched in the industry. We also bring depth, deep competency in these areas, to bring value to the vertical segment we're talking about today, which is the automobile and autonomous driving.
That kind of depth is unmatched, and I'm going to talk to you about how we apply it to this industry. I thought Diane's example was perfect, where she talked about how they've carried capabilities from healthcare over to the automobile. That's exactly what we do in software: we bring that competency to bear in this market. Because there's so much to talk about, I thought the easiest way to explain this is to walk through the seven different vertical areas that we focus on and the horizontal capabilities that we apply to this vertical segment.
And the first one, an important one is simulation. Simulation is where we actually work with partners before our silicon is even available. We build a simulation environment that actually emulates our silicon before it's even produced. We use this to build critical software on top of our platforms. This accelerates the rate at which you can put these devices into market.
We've done this for years in the BIOS area and the operating system area, but we're extending it to the application space now. For example, we worked with NASA on their unmanned space vehicle: mission-critical software was validated in our simulation environment before they actually deployed that vehicle, allowing them to look for any critical situations or errors in their code long before they got to silicon. That's a great competitive advantage we have. When you apply your solution on top of Intel architecture, you now have a simulation environment to validate critical components before they're deployed.
That can be applied directly to the automotive industry, and that's what we're expecting to see happen. There's also the system you build in the automobile, with components that go together from the CPU to the network to the data center. We have a modeling capability called CoFluent that allows you to model that out and see how your prediction of that system and network will actually operate. It gives you an opportunity to look for bottlenecks — maybe you under-scoped certain areas of that system and have to apply more resources because it won't run as efficiently as you thought.
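As a trivial stand-in for that kind of system modeling — not CoFluent itself, just the underlying idea — you can treat the pipeline as a chain of stages and ask which one caps end-to-end throughput. The stage names and rates below are invented for illustration:

```python
# Toy system model: a pipeline's sustained throughput is capped by its
# slowest stage. Stage names and GB/s rates are illustrative assumptions.

stages = {
    "in-car sensors": 2.0,
    "in-car compute": 1.5,
    "uplink":         0.1,
    "data center":    5.0,
}

bottleneck = min(stages, key=stages.get)
print(bottleneck, stages[bottleneck])  # uplink 0.1 -> apply resources here
```

Even this crude version captures the modeling payoff the talk describes: you find the under-scoped stage before you build the system, not after.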
So we provide that modeling tool to the industry as well. Now this is something you hear a lot about, which is security. Security is going to be absolutely foundational in the automobile. You heard it when Brian talked with BMW and how important security is to them. It's important to everybody in the automotive industry.
And we need to make sure that we do our part in providing secure components that can be developed with your application to secure that automobile. When the systems boot — Doug talked about the number of systems within that vehicle; it's a system of systems — you want to be assured that nobody has tampered with them. That's why we provide secure boot.
We provide that capability to ensure that the system has not been tampered with when it starts up. You have mission-critical applications that you want to ensure aren't tampered with, so we provide a trusted execution environment to give you an enclave to execute those applications in — again, additional security to help protect a critical application. Doug talked about the gigabytes of storage that are on the vehicle. You have to encrypt that data and provide secure storage so that it cannot be compromised.
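The secure-boot idea — measure each image and refuse to hand over control unless it matches a trusted manifest — can be sketched like this. Real secure boot uses hardware roots of trust and asymmetric signatures; the HMAC with a device key here is just an illustrative stand-in, and all names and images are hypothetical:

```python
import hashlib
import hmac

# Toy secure-boot measurement: hash each stage and check it against a
# provisioned manifest before handing over control. Illustrative only.

DEVICE_KEY = b"fused-device-key"  # hypothetical key, notionally burned into hardware

def measure(image: bytes) -> str:
    return hashlib.sha256(image).hexdigest()

def seal(measurement: str) -> str:
    return hmac.new(DEVICE_KEY, measurement.encode(), hashlib.sha256).hexdigest()

firmware = b"stage1-bootloader-image"
manifest = {"stage1": seal(measure(firmware))}  # provisioned at build time

def verified_boot(image: bytes, expected: str) -> bool:
    """Refuse to hand over control if the image doesn't match the manifest."""
    return hmac.compare_digest(seal(measure(image)), expected)

print(verified_boot(firmware, manifest["stage1"]))      # True
print(verified_boot(b"tampered!", manifest["stage1"]))  # False
```

Chaining this check stage by stage — each verified stage verifying the next — is what gives the "nobody has tampered with it" assurance at startup.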
And then finally, something that we're really excited about is our Enhanced Privacy ID, EPID. There are over a billion devices already in the market with this capability, and we're providing it to the industry. It gives you an anonymous way to identify a component. It's anonymized, so nobody knows who it is, but they know it's something they should be talking to.
This has many, many applications. For example, if only certain government vehicles have the right to enter a certain area or park in a certain location, you can identify whether that vehicle is in the right place. I'll talk about other applications later on, but that's going to be a critical element that developers can take advantage of to ensure that whatever they want to talk to, they're talking to the right element. In September 2015, we started the Automotive Security Review Board to look at security capabilities and best known methods and educate the rest of the automotive industry. We're at the leading edge of this new movement to connected smart cars and autonomous driving.
And so we need to make sure that we provide the best capabilities and the best known methods for those building these kinds of automobiles. We released our second white paper last week with input from these industry experts, talking about how these systems should be put together and the things to consider as you're building a secure environment. So we're helping drive and accelerate that education in our industry. Another horizontal capability that we're well known for is our work in operating environments. We have deep, deep competency in operating environments across the board.
When you look at what we do in the onboard chassis or the software-defined cockpit, Intel has deep, deep competency in Linux. This is why we built the Yocto Project. We recognized that the industry was fragmented in its approach to building embedded Linux operating environments, so we built Yocto to help you do a much better job of building them. Over 50% of the industry is using Yocto now to deploy embedded Linux, and that continues to grow.
BMW already uses it for in-car capabilities, distributing their own Linux version within their cars. We expect other manufacturers across the automotive industry to take advantage of this, just like the broader embedded community has as well. We do know that Android is going to have a big impact on the automotive industry. They started adding automotive capabilities in the Android N release, with more coming, and it's going to be merged into the O release. I've stood up on stage many times and told you how great our capability is in Android, that horizontal capability, and we're applying it to the automotive industry.
The automotive industry is going to start utilizing Android, and I can tell you, we've been working with several of those people already. One of the letters that came to me from Doug, Doug Davis, was a thank-you, and I don't send myself thank-yous, you know. Well, thank you, Doug, because the work you do in Android has actually helped secure this design win. It's helped us because you guys moved so quickly to show them the value of Intel architecture using Android.
And so that's that horizontal capability we're applying to the automotive industry. And then Wind River has a suite of capabilities and a professional services organization that works directly with the automotive industry. It's called Helix Chassis, and it has several tool suites within it that apply to the automotive industry today. For example, if you're writing an application and you want to be able to connect to the external ecosystem, it has communication capabilities to ensure that the outside world connects with the car. It also has certified toolkits that help you build applications that are ISO certified to operate within a car, giving you assurance that you're doing the right things when you build an application for the vehicle itself.
When it comes to autonomous driving, you cannot have an operating environment that doesn't respond quickly. We call this a preemptive kernel, or a real-time OS. What that means is that when something needs to respond quickly, it has to be able to preempt other work. We take that deep competency, as the number one contributor to Linux, and apply it in this space as well. We're working to put what we call PREEMPT_RT into the Linux kernel.
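As a loose illustration of what a preemptive kernel buys you, here is a toy tick-by-tick simulation. The task names, priorities and tick counts are invented; a real RTOS does this in the kernel scheduler, not in application code.

```python
import heapq

# Toy preemptive priority scheduling: when a higher-priority task becomes
# ready, it runs immediately, preempting lower-priority work in progress.
# arrivals maps tick -> (priority, name, ticks_of_work); lower priority
# number means more urgent.

def run(arrivals):
    """Simulate tick by tick; return the name of the task run each tick."""
    ready, timeline, tick = [], [], 0
    while ready or any(t >= tick for t in arrivals):
        for t, task in arrivals.items():
            if t == tick:
                heapq.heappush(ready, list(task))  # new task becomes ready
        if ready:
            ready[0][2] -= 1                  # highest-priority task runs
            timeline.append(ready[0][1])
            if ready[0][2] == 0:
                heapq.heappop(ready)          # task finished
        tick += 1
    return timeline

arrivals = {
    0: (2, "infotainment", 3),    # low priority, 3 ticks of work
    1: (0, "brake_control", 2),   # high priority, arrives mid-run
}
# brake_control preempts infotainment as soon as it arrives:
print(run(arrivals))
# -> ['infotainment', 'brake_control', 'brake_control',
#     'infotainment', 'infotainment']
```

Without preemption, brake_control would have waited two more ticks behind infotainment, which is exactly the latency a real-time kernel is there to eliminate.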
So we're upstreaming that capability so that it's automotive ready. We're looking at functional safety standards and driving those upstream, using the competency that we have in Linux to apply it to the automotive industry and ensure that we have that real-time capability in the operating environment. Wind River has had a real-time, industrial-capable OS, VxWorks, for years. At my first job out of college, they were my competitor when I was working at another company, and I've been out of college for a few years now.
It's been in the market for a long time, and it has automotive capabilities and real-time capabilities built in as well. And the final thing in the operating environment, Doug talked about it: there are over 150 ECUs in the automobile today. By 2025, that's expected to go to around 50, reduced by about 65%. How do you do that? You virtualize that environment.
So that's why we're investing in virtualization, to ensure that workloads can be shared across a compute unit. We're deepening our virtualization capabilities; you already know about KVM and Xen, and on the network, OPNFV. So we continue to look at opportunities to virtualize that environment, to help consolidate workloads and ensure they're optimized on Intel architecture. I talked about vehicles communicating with each other. The other thing that we have done is look at what's happening in the Internet of Things, the broader capability, and that has been around things communicating with each other.
There's a lot of talk about how things have to talk to the data center. But we also recognize that things are going to talk to each other. And in order for things to talk to each other without going back to the data center, you have to have standard capabilities to make sure that happens. It's well publicized that there were two approaches to making that happen.
Earlier this year, we aligned those two approaches and formed OCF. OCF is building the standard definition for provisioning, managing and securing that communication between things, which is so important to have that value exchange between devices in a secure and provisioned way. IoTivity is the instantiation of that: it's the open source project that's actually building the reference implementation for others to bring to market. And I'm happy to say that just last week we added a new project around automotive, to take the reference capabilities and the requirements of the automotive industry and apply them in an implementation and a standard.
So we're extending that Internet of Things work to take advantage of the unique requirements of the automotive industry and applying it in the standards. We were also a founding member last year of OpenFog. Again, Doug talked about the need for information to flow back and forth between the data center and the device through the network. OpenFog helps set the standards to ensure that you manage, provision and secure that information going back and forth. So we're driving that standard as well.
This will help accelerate innovation. We absolutely believe these standards will accelerate the pace of innovation, allowing others to innovate around this. Now, how long do you guys usually own a vehicle? Some of you probably 2 to 4 years; others, like me, closer to 10-ish. When you deliver a vehicle, you want to make sure that two things happen.
First, as you understand new security gaps, because these cars are going to have over 100 million lines of software code, some of them closer to 300 million, there is going to be a need to update that code as vulnerabilities are discovered. You have to have a robust over-the-air update capability to ensure that vehicle is always updated with the latest technology. You're not going to have the luxury of bringing those vehicles into the shop every time you need to update them. You have to have the ability to do that over the air.
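The over-the-air flow described here can be sketched roughly as follows. The manifest format, names and version numbers are invented for illustration; a production OTA system like CarSync would add signing, staged rollout and rollback on top of this.

```python
import hashlib

# Sketch of an OTA update check: compare versions, download the payload,
# verify its digest before applying, and keep the old image on mismatch.

def check_and_apply(installed_version, manifest, download):
    """Return (resulting_version, status) for one update cycle."""
    if manifest["version"] <= installed_version:
        return installed_version, "up-to-date"
    payload = download(manifest["version"])
    if hashlib.sha256(payload).hexdigest() != manifest["sha256"]:
        # Corrupt or tampered payload: refuse it, keep the running image.
        return installed_version, "rejected: digest mismatch"
    return manifest["version"], "updated"

payload = b"new firmware image"               # hypothetical payload
manifest = {"version": 2,
            "sha256": hashlib.sha256(payload).hexdigest()}

version, status = check_and_apply(1, manifest, lambda v: payload)
print(version, status)   # -> 2 updated
```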
So we take that horizontal capability we have and apply it to the automotive industry. Wind River, as part of their Helix Chassis, has Helix CarSync, which is an example of that capability that's available today, allowing automotive manufacturers to update the automobile with the latest software or firmware. Second, there are service opportunities: the ability to deliver a new capability as a service to the car, giving manufacturers value-added services they can offer the owner of the car down the line. It gives them an opportunity for revenue generation as they put that service on the vehicle.
You guys all remember when the iPhone became popular, what was the biggest struggle car manufacturers had? How did they support that? And the cars that did became more popular. There are going to be more services. Who knows?
We all may be driving Pokemon cars, I don't know. But there are going to be services that you want to push down to the car, and they're able to do that now. Remember I mentioned EPID. EPID is an example of a technology that helps you identify, am I talking to the right vehicle?
Is whoever's talking to me the right service? It provides that secure attestation between the two, to ensure that you're actually communicating with the party you should be and you're getting the information you should be getting at the right time. Diane did a fabulous job of talking about artificial intelligence and giving you the definition, so thank you. Artificial intelligence is going to be absolutely paramount in the vehicle. We refer to this subset called deep learning.
And I told you we have deep competency in software. What we're doing with that is engaging with the deep learning ecosystem and optimizing the frameworks that data scientists utilize to train their algorithms. They use frameworks like Torch, Theano, Caffe and others to train their models. The data scientists' engagement point is these frameworks. So we're engaging with each one of these and optimizing them to take full advantage of Intel architecture, including incorporating our Math Kernel Library.
We extended that with deep neural network capabilities to actually accelerate the performance of these frameworks. So we're actually going to provide optimized frameworks that run on Intel architecture across the broad set of platforms that we deliver. As we talked about, there's Atom all the way to high-end Xeon to Xeon Phi and the FPGAs. We're going to deliver those optimized frameworks for all of those environments. What does that mean?
The data scientist doesn't need to change how they work or how they engage with that framework. All they need to do is deploy on the highest-performing platform in the world. Diane talked about the importance of Intel architecture in this space because they need to scale these large, large workloads. That's why we optimize Hadoop and Spark, which help scale these workloads with the big data sets that are coming from the car. As you train, you have to train quickly; time to train matters.
And that's why we optimize Hadoop and Spark, to accelerate training so these data scientists get their models trained much more quickly. We have analytic tools that help accelerate their analysis capabilities, and we have an open platform called TAP that allows them to build analytic capabilities. So we have a deep competency in helping accelerate analytic capabilities on our platform. And you all know about some of our work in computer vision. We have an SDK, and we've worked in computer vision for a long time.
We're now happy to, I guess, preannounce this, and we're going to rename it; this isn't the final name, but we have the Intel Deep Learning SDK. It combines the capability of our computer vision work with the ability to take that trained model that you've iterated on over and over, as Diane said, at very, very deep levels, and move it to a consistent architecture for scoring. You need a tool that takes that model, compresses it and allows it to be deployed on an endpoint like an automobile. And then you score it, make decisions, inferences, in that automobile.
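As a toy illustration of that compress-then-score idea, here is a sketch of quantizing a trained model's float weights to 8-bit integers for deployment, then running inference with them. This is not the Intel Deep Learning SDK; the weights, the linear model and the quantization scheme are invented for the example.

```python
# Quantize trained float weights to signed 8-bit integers plus one scale
# factor (the "compression" step), then score an input with them (the
# "inference" step on the endpoint).

def quantize(weights, bits=8):
    """Map floats to ints in [-(2**(bits-1)-1), 2**(bits-1)-1] + a scale."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def score(features, qweights, scale, threshold=0.0):
    """Linear-model inference using the dequantized weights."""
    logit = sum(f * q * scale for f, q in zip(features, qweights))
    return 1 if logit > threshold else 0

weights = [0.81, -0.33, 0.05]        # pretend these were trained offline
qweights, scale = quantize(weights)  # 8-bit ints + one float to ship
print(qweights, scale)
print(score([1.0, 2.0, 0.5], qweights, scale))
```

The point is the division of labor: heavy iterative training happens in the data center, and only a small, fixed artifact goes to the car for scoring.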
By the end of this year, we will have that tool available. We're going to be beta testing it in a few months and delivering it as a final product by the end of this year. So we'll have the necessary tools for data scientists to train their model and then be able to deploy it on an endpoint for scoring. This is really exciting stuff, and we're going to continue to evolve that tool set as we go forward, taking advantage of our deep competency in building tools at Intel. In addition, we're going to be working with the thought leaders in the academic world, ensuring that we have projects and joint innovation together, because this is not going to stand still.
There are going to be new frameworks, new algorithms, new capabilities. Diane talked about how they're going to drive those capabilities into the silicon. We're also going to drive them into the tools. The neat thing about what we do in our world is that with the Deep Learning SDK, whether a new framework starts tomorrow or a year from now, it will plug into that tool set, and the tool set will not need to change. So it's going to provide consistency as this world evolves.
So we're going to stay ahead of the curve, stay close with the advanced technologies and ensure that we incorporate it into our solutions from our silicon to our software. And how do we do that? How do we engage? It's developers. Developers have to have a great experience working with us.
That experience is so critical. This may come as a shock to you, but we engage with over 16 million developers, one of the largest developer communities in the world. We work with thousands of ISVs. We're taking that strength and building that same capability for machine learning and deep learning as a subset of artificial intelligence: engaging with them on our latest techniques and best known practices, with experts in the industry. It's where they get, free of charge, the tools I described and the updated frameworks optimized for Intel architecture. The websites are listed below.
This is the jumping-off point for how we engage with that big developer ecosystem and make sure that they take full advantage, take those optimized frameworks and our tool set and apply them to Intel architecture end to end. I've stood up here many times and said, the thing I love about doing software at Intel is we have an architecture. We don't just have an instruction set; we have an architecture. That allows whatever we do in software to be applied from Atom to Xeon and get the best capabilities out of it.
So at Intel, I'm very proud of the broad capabilities we have in software and how we can apply them deeply to vertical markets like the automobile and autonomous driving. We are going to bring that competency to market, and that's why we believe we're the best in the industry at making this happen. So thank you very much. And with that, I'll bring back up Doug to drive us home.
So I want to say thanks to Diane and Asha and Doug for giving us that overview. And I want to kind of now pull everything together that you've heard around why Intel wins in autonomous vehicles. So I encouraged you earlier to think about the amount of computing that will ship in relation to the number of cars on an annual basis. As that technology evolves, we'll continue to see that the compute needs will increase dramatically. And as you've heard, as those algorithms mature, we'll see compute moving into things like the FPGA.
But we're also going to need a tremendous amount of computing that will exist within Xeon processors and require that Xeon level of performance. Today, cars have about $100 to $200 of Intel-addressable silicon BOM, based on in-vehicle infotainment, instrument cluster, rear seat entertainment and those other kinds of functions that we referred to in this thing we call the software-defined cockpit. By the year 2025, however, Intel's opportunity will grow significantly, as you can see on the slide, 10x to 15x, as we begin to expand and include autonomous features. An L5 car in the future will include multiple high-end Xeons, high-performance FPGAs, memory and high-bandwidth in-car connectivity, in addition to connectivity back to the data center or cloud. There are also those that are looking at that level of compute going into cars, and I'm sure you're asking the question yourself, how is that ultimately going to be cost effective?
This is an enormous amount of computing. What's going to happen from a cost standpoint? So I thought, well, let's take a look at what's been happening over the past decade with respect to the broader Internet of Things and the enabling technologies for those kinds of applications. The cost of sensors has come down about 2x in the last 10 years, but the cost of connectivity has come down about 40x in the last 10 years. And the cost of computing, thanks to Moore's Law, has come down about 60x in the last 10 years.
And why is that possible? Well, obviously, Moore's Law, which is continuing to drive down the cost of computing, and now, more and more, things like the cost of connectivity as we continue to scale the cost of the transistor. And we see that that's the key to making these kinds of capabilities cost effective in cars that will be on the road over the next 5 to 10 years. Russell often asks, hey, is Intel a credible long-term supplier for the automotive industry? I wanted to share a little bit more insight with you along those lines as well.
We do have a long history focused in the automotive space. We've been delivering in-vehicle infotainment systems and increasingly new applications like instrument cluster. And I mentioned earlier, more and more of those pixels need to be driven within the vehicle. We defined all this as the software-defined cockpit. And we look across our Intel silicon and the processors that we deliver, the Wind River software that you heard Doug talk about, and Altera FPGAs.
We have 49 design wins with various OEMs today in this space. We're also working on solutions to fulfill those design wins, with 33 different implementations from Tier 1 suppliers. We have over 30 different vehicle models that are on the road today. And as we look at that overall footprint, we see it continuing to grow as well. We booked over $1 billion in new design win revenue just in the last 12 months in this space.
But okay, let's look at what's happening in the autonomous world as well. If we look at our engagements there, we have 19 different OEM designs in some level of development. We have engagements with 9 Tier 1 suppliers, and we're working with 59 different partners within the ecosystem. This includes software vendors, hardware vendors and integrators that are all part of the ecosystem needed to create these kinds of solutions for an OEM. So in aggregate, I think it's clear that we have a growing business in the automotive space.
We understand the dynamics within this industry and what it takes to be able to deliver on the needs of our customers. We've built the relationships within the ecosystem and with all the Tier 1s and OEMs that are the leaders in this category of products. And finally, as I indicated, we've got some fast growing engagements in the autonomous driving space as well. You saw that another example this morning. So as these autonomous vehicles begin to evolve, there are other things that we have to think about as well, right?
There are other aspects that are going to become important. Think about smart cities, right? They're going to need to adapt to what's happening with autonomous vehicles. We're going to have autonomous vehicles on the road. At the same time, we're going to have vehicles still driven by us humans, right?
So the cities will need to be able to adapt and handle that kind of crossover phase. But there are huge opportunities to build in smart city infrastructure that autonomous vehicles can take advantage of, right? So think about smart parking. If we have smart parking infrastructure in a city, hey, us human beings don't need to drive around anymore looking for a parking space, right?
We can tell the car, go park, right, and go off to our meeting or go to the movie or whatever it happens to be. But even with that infrastructure during this crossover phase, even us humans who are still driving cars, and I'm kind of like Doug, I keep cars for a long time, will have that same infrastructure and capability in an app, so that we don't have to spend time driving around looking for parking. We know exactly where it is in the closest proximity to where we are. I think it's also obvious that government regulations will play a big part in the evolution of this space, how these cars get developed over the next few years and then ultimately put on the road. And of course, the government regulators that we've been talking to and interfacing with regularly have 3 critical things that they always want to talk about.
They want to talk about, of course, safety. They want to talk about security, some of the things Doug talked about, and privacy, right? There's a lot of data there that we need to be concerned about. And Intel has a leading role in helping shape, form and even influence how these regulations are being created. For example, there are ongoing efforts to define the global build-out of 5G.
You heard Asha talking about that. And this will be critical for how these vehicles talk to each other and how they talk to other things in the environment around them. And there are some important milestones coming up in the U.S. just in the next year.
In 2017, we expect the U.S. Congress is going to be tackling 3 really important topics. They're going to be talking a lot about security and privacy and how that relates to in-vehicle data. They'll be talking about liability for autonomous vehicle accidents, right?
That's a rich topic that's already getting a lot of discussion. But they're also going to be looking at uniform state rules for autonomous vehicle testing and deployments. And in the following year, in 2018, Intel is already starting kind of an initial foundational advocacy for the 2020 reauthorization of a really important transportation funding bill called the FAST Act. So we have a pretty rich set of things that we're engaged in to make sure that from a regulatory standpoint, we're able to continue to innovate and develop these kind of applications. And then as I said, safety and security are critically important as well, but obviously very important to us.
Doug touched on that, and we're continuing to build our capabilities around a class of technologies called functional safety. And this is integration of technologies that ensure safe vehicles in the way that they operate and the way that they deal with anomalies within the system. And then we're also working across the industry to create capabilities to harden systems, make them less susceptible to hacking. And Doug talked a little bit about the Automotive Security Review Board that we helped create. And of course, we're continuing to invest within Intel as well, right?
We recently announced the strategic partnership between Intel, BMW and Mobileye, not only to create BMW's highly autonomous and fully autonomous vehicles that they'll be launching in 2018 and 2021, respectively; we're also investing to support a partnership that will enable open and standard interfaces within the industry. And it's these types of open and standard interfaces that will help accelerate the pace of innovation, right, allowing innovation to progress at the rate of technology development, while still allowing the OEMs to differentiate the products that they provide to their customers. At Intel, we believe the best way to learn is to just jump in and start doing it, right? That's how you're going to get the most insight. And of course, you've seen in the back of the room, we're building our own fleet of cars.
You can take a look if you haven't had a chance already to see what we're building and understand why this is so important. And these cars are out collecting data, and it's helping us to get a data set that we can use as we develop our machine learning and deep learning capabilities. We're going to be doing this in several different cities in both the U.S. and Europe.
We're also working with Intel Labs. They're doing some great things around user experience, right? It's going to help us understand again how us humans are going to interact with these kinds of vehicles. If you're sitting in your car riding across town, what are you going to do with all that time? What's that experience going to be like?
What kinds of capabilities do we need to be able to build in for that rich media experience? We need to help people feel comfortable in those vehicles, right? I mean, I mentioned earlier, back seat driver, right? That person in the back seat going, hey, do you see that bicyclist? Well, we're sitting in the car and we see that bicyclist and we catch a glimpse of them, we're going, does the car see that person, right?
So understanding how we interact from that perspective. The other is being outside the vehicle, right? If we're about to cross the crosswalk and this autonomous vehicle is coming up, does it see us, right? We're still going to inherently be concerned about that. And so Intel Labs is doing a lot of work in this regard.
They brought a system with them that we're using to be able to test and understand those experiences. So be sure and take a look at what they're doing. And we're also going to continue to acquire critical assets from a silicon software and technology standpoint. We'll continue to add different elements that can either enhance what Intel is building or fill in pieces that we need as we create these end to end solutions. And then finally, of course, we'll continue to invest with the ecosystem through the Connected Car Fund that we created several years ago.
So we talked a lot about Intel's ability to deliver the end-to-end assets that are necessary for autonomous driving. You heard Diane talk about artificial intelligence and machine learning and the deep, rich capabilities that we have there. Asha explained why 5G is so critical to these solutions and the importance and timing of that transition. Doug talked about the depth and breadth of the software competencies that span our entire portfolio. And of course, there's the leadership position that we have in compute and machine learning from car to cloud, the broad connectivity portfolio, the software ecosystem, the tools, and the unmatched scale that we have compared to others in the industry to create that autonomous driving experience from end to end.
And we're a proven builder of ecosystems that are committed to creating open and standard interfaces that will be essential to help this market move forward at the pace that technology is evolving. Intel is the only supplier with the end to end architecture and the assets necessary to meet these kind of autonomous driving requirements. So with that, I'm going to kind of wrap up. I'm going to ask Bhargavi to come back up and join us on stage and give you a chance to ask some questions.
I'm going to invite the presenters back on stage for a panel Q&A, please. Mark and Trey have mics, so we'll field questions after we get set up.
Hi, thanks for hosting today's very informative presentation. I'm hoping you can explain a little bit about the partnership with Mobileye to serve autonomous driving for BMW. I think they're looking at doing volume by 2020. And today's presentation suggested to me that maybe you don't even need Mobileye. So I'd love to hear about how the 2 companies are going to work together.
Okay. I only caught bits and pieces; there's a bit of an echo up here. So if I don't get it all, we'll let you follow up on that. But that relationship is a significant strategic relationship between the 3 companies, not only to build the vehicles that BMW has said will be highly autonomous by 2018 and fully autonomous by 2021. It brings together the essential capabilities to deliver on their requirements in the time frame they need.
Clearly, from an Intel perspective, we play a significant role in the compute capability. Mobileye obviously plays an important role in all of that camera and sensor data and then we fuse it together to create that solution. But also what's important here is the focus on creating open standard interfaces within the industry, right? We see that this technology needs to move very rapidly, and we share that vision of the need to develop and then create elements out into the industry so that open platform can be built upon by others and advance rapidly.
Thanks. And then one follow-up if I can. You showed quite a bit about 5G relative to autonomous driving, including V2X, for which there are already some deployments that use DSRC. Is Intel's expectation that DSRC and 5G coexist, or that DSRC goes away in the autos market?
Yes, it's a great question. I'll start, and I'll let Asha help me out. DSRC, if you step back and look at the history of it, is a technology that's been in development for 17 years, right? And we're still at a stage prior to full deployment of that. Meanwhile, we see that 5G is coming along with a set of capabilities that can support not only vehicle to vehicle, but vehicle to, I think Asha called it V2X, right? Vehicle to maybe me walking down the street and crossing the intersection, where my phone can indicate to the car that I'm here, right?
And so the ability for 5G to support vehicle to vehicle and vehicle to other things is a technology that's emerging really quickly. And from our perspective, we think it's well suited to fulfill not only that very particular application but also a broader set.
And just to close out, the market is evolving rapidly. You have pedestrians, you have bicycles and so on, and you have to take them into account. And since we are defining this for a rapidly evolving society, the solutions will be evolving rapidly over time as we do with 5G, too.
I wonder if you could talk about these in-vehicle functions that you list here. Do you view that as sort of one central CPU to solve all of those problems? Or is it a lot of different subsystems? And also, what's the power budget that you have to work with, the power consumption within a car?
Yes. A very specific question for kind of a broad set of applications. Today, as you heard me say, the way most of those functions are deployed is as individual ECUs. So I have an ECU for the instrument cluster, an ECU for the infotainment system, an ECU for the brakes. The way this has evolved, in order to deliver those incremental functions across lots of different models, right, is as individual ECUs.
So obviously, there are different power budgets for each of those. And if you think about the instrument cluster, it's less about power dissipation; we have a big battery in the car, right? It's much more about the thermal budget I have available, given the dissipation of thermal energy. And that's typically what's defining each of these in relation to the amount of performance that needs to be delivered.
As I talked about earlier, and as Doug mentioned as well, we see an opportunity to start to consolidate that, and that will help both from a total compute standpoint and from an overall thermal management standpoint.
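The consolidation trade-off in the answer above, summing the compute demand of per-function ECUs and checking it against a shared thermal budget, can be made concrete with a toy model. The ECU names and all numbers here are invented for illustration; they are not figures from the presentation.

```python
# Hypothetical example: consolidating per-function ECUs onto a single
# domain controller and checking the shared thermal budget.
# All compute (DMIPS) and thermal (watts) figures are made up.
ecus = {
    "instrument_cluster": {"compute_dmips": 2000, "thermal_w": 5},
    "infotainment":       {"compute_dmips": 8000, "thermal_w": 12},
    "anti_lock_brakes":   {"compute_dmips": 1000, "thermal_w": 3},
}

def consolidate(ecus, thermal_budget_w):
    """Return the total compute the consolidated controller must supply,
    and whether the summed dissipation fits the available thermal budget."""
    total_compute = sum(e["compute_dmips"] for e in ecus.values())
    total_thermal = sum(e["thermal_w"] for e in ecus.values())
    return total_compute, total_thermal <= thermal_budget_w
```

The sketch shows why consolidation couples the two concerns: one controller must deliver the sum of the compute while staying under one thermal envelope, rather than each function managing its own.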
Okay. Hi. Two questions. One is, you mentioned some of what you're bringing to the BMW partnership with Mobileye. I'm not sure how specific you want to be, but could you describe what would be running on Xeon versus the FPGA versus Mobileye in that vehicle?
And then as a follow on, as we look through some of the prototype vehicles, not a lot of room in the trunk for golf clubs.
That's important.
Yes, important. So as you go forward with Moore's Law, for vehicles in 2021 your compute capabilities are going to be much greater. How do you see this once the first vehicles actually come out? I guess the question is power budget, size, and compute capability based on what's probably going to be available in 2021.
Yes, great. So the two questions. How do you break out the very specific workloads in the implementation of the BMW system? We haven't disclosed that publicly. Part of what we're doing as we develop these platforms is optimizing each according to the strengths we can bring to that implementation. We haven't broken that out.
But your reference to, hey, there's a lot that goes into the back of the car to support those kinds of functions. Yes, but that's also the beauty of Moore's Law, right? We'll continue to deliver more compute either in a fixed footprint or, where we have a larger power budget, to do different things. As we consolidate these different workloads around the car, we can deliver more compute capacity. I was talking to one of our engineers just last night, and he said, look inside, I think it was the BMW, look how much cleaner this is, right? In a matter of just a couple of months, we've continued to find ways to refine that, and we're going to continue to do so. I think you're going to see the size of that implementation shrink very, very quickly.
A couple of questions on the revenue side. How should we think of the amount of semiconductor BOM in a car right now? You talked about 150 ECUs going down to 50. So how does that evolve? What percentage of that do you think Intel can capture?
What do you have now, if you're willing to share? I doubt you are, but that's okay; I'll ask. But in general, how do we think about semiconductor revenue in a car, and how does that change?
Yes, it's great. There are really two things happening at the same time, right? There are a lot of these ECUs in a car. We think those will consolidate, but today that might be an infotainment system or an instrument cluster running on an Atom processor, right? And so you saw some of the numbers of vehicles shipping.
As that number of vehicles continues to grow, obviously, we have a bigger volume opportunity over the next few years. But then as we can start to consolidate more and more of those functions and workloads, the amount of compute needed to support more functions virtualized on the platform will go up as well and that will start to move us into higher performance computing. So we'll see that transition happening. And we think that the combination of both creates more and more opportunity for Intel as we put more compute into a single footprint and we can virtualize those workloads.
Yes. Is there a baseline actual dollar figure right now for semiconductors in these autonomous cars, or even in non-autonomous cars?
In terms of a TAM, we didn't include that. We can follow-up and point you to some sources that will help you with that.
Thank you. What you seem to have described so far is specific Intel capabilities and products that we understand, Xeon and so on, that could be used in cars. Will you be rolling out a specific platform for automobiles, with a named family of tailored products or tailored interfaces for auto applications?
Certainly, there are some interfaces that are important in these particular applications. In the next few years, we'll see an interface called TSN, Time-Sensitive Networking. We expect that to become a common standard within automotive applications over the next few years. By the way, it will also become a standard within industrial applications, so we're going to get scalability across multiple industries as that occurs.
And as you might imagine, in order to fulfill the requirements in these kinds of applications, we'll create particular SKUs of our products that meet the temperature requirements and have the built-in functional safety capabilities, but not entirely new processors; they're more adaptations of our standard products, right? We get the adjacency of the investments being made in the data center product portfolio, or we'll create those necessary temperature capabilities within the products we build in our wireless technology portfolio. So it's more about taking standard products and adapting them for the environmental conditions, or maybe some specific interfaces, that will be necessary in these industries.
Yes, very high reuse.
Yes.
Very high reuse between the two.
Absolutely.
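The core mechanism behind the TSN interface discussed above is time-aware traffic scheduling, standardized in IEEE 802.1Qbv: gates for different traffic classes open and close on a fixed repeating cycle, so safety-critical frames get guaranteed windows. The sketch below is a simplified model of such a gate control list; the cycle length, gate assignments, and durations are assumed values for illustration, not from any real automotive configuration.

```python
# Simplified model of an IEEE 802.1Qbv time-aware shaper schedule.
# Each entry opens a set of traffic-class gates for a slice of the cycle.
CYCLE_NS = 1_000_000  # 1 ms cycle, an assumed value

schedule = [
    {"gates_open": {7},       "duration_ns": 200_000},  # control traffic
    {"gates_open": {5, 6},    "duration_ns": 300_000},  # sensor streams
    {"gates_open": {0, 1, 2}, "duration_ns": 500_000},  # best effort
]

def gates_at(offset_ns, schedule, cycle_ns=CYCLE_NS):
    """Return which traffic-class gates are open at a given time offset,
    wrapping around the repeating cycle."""
    t = offset_ns % cycle_ns
    elapsed = 0
    for entry in schedule:
        elapsed += entry["duration_ns"]
        if t < elapsed:
            return entry["gates_open"]
    return set()
```

Because the schedule repeats deterministically, every node on the network can predict exactly when its traffic class is allowed to transmit, which is what gives TSN its bounded-latency guarantees for both automotive and industrial use.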
Thank you. Chris Jones from Canalys. So you're building your own fleet of cars. You'll be testing in different cities around Europe and the U.S.
Can you give a bit more detail how many cars, how many cities? And also what you need to do to initiate road testing in any particular state or country? Do you need a permit, etcetera? Thank you.
Yes. We haven't disclosed all of the cities or the number of cars. It's something we're still working on as we continue to evolve our plans for how we go about building out that level of learning. One of the things you touched on, though, that is important to realize is how cities are responding to that, right? We're having very good conversations with several cities in the U.S. as well as in Europe. Not only do we need to understand what's going to be particular to each environment, but we also obviously need to make sure that we're well coordinated with them in terms of what their expectations would be. It's important for us to do this not only to see different environmental conditions, but also different roadway signs, different laws, different types of environments that we'll need to understand.
Great. Okay.
Well, thank you all for joining us, and thank you to our presenters. We invite you to enjoy our demo showcase, and we also have lunch at the back. And thanks for everyone on the webcast.