Intel Corporation (INTC)

Artificial Intelligence Day 2016

Nov 17, 2016

Speaker 1

Ladies and gentlemen, please welcome Intel CEO, Brian Krzanich.

Speaker 2

Good afternoon, everybody. Welcome to Intel's first Artificial Intelligence Day, or as we like to call it, AI Day. I think Diane and the team have put together a really great day today, to allow for a set of conversations that will both describe a little bit about our vision of artificial intelligence and give you a little more detail about what really is encompassed when people use the term artificial intelligence and what that means for technology. So with that, let's get right into it. Today, I'm going to talk about three things.

And the first thing I thought we'd do is spend a little time talking about Intel's vision around artificial intelligence and really explain what the different meanings of artificial intelligence are. The second is to talk about how AI is becoming pervasive, how we think about that, and how we're looking at that across our product portfolio and across the industries we serve. And then finally, how Intel is really trying to lead and be a catalyst for AI, to accelerate it and move it faster. We believe it can bring great value to businesses and operations and to the silicon we provide. So let's get right into the first topic for today.

So for us, we really look at this as Intel continuing to evolve and change the way we look at products and technology. We're evolving to become a company that powers the billions of smart and connected devices. And as you look at that, you really have to think about how important artificial intelligence is and how it becomes a critical component in those connected devices. Those devices really just produce noise without some form of artificial intelligence to allow us to understand and correlate that data. So to start, I thought we'd take a look at how machine learning, deep learning and artificial intelligence, terms that are often used, are interconnected and what they really mean.

So if we could just roll our video really quick, it will help us.

Speaker 4

The relationship between these three key concepts is fairly simple. Artificial intelligence refers to the broad category of computing in which a system is capable of learning directly from data, a capability made possible by the advent of data analytics and applied either through a set of rules that are evolved over time through human intervention with the system or, increasingly, through the process of machine learning, in which the system expands its ability to process and use data on its own. The broader concept of artificial intelligence has been around since the 1950s, but it was the introduction of machine learning in the 1980s that brought the first real wave of significant advancements to the field. Then, as computational performance reached a tipping point in the last decade, a new wave of even more dramatic breakthroughs began to gather momentum. Deep learning is a branch or subset of machine learning that leverages those computational advantages and a set of new techniques, such as neural networks, to make sense of and learn from even larger amounts of data. The result has been an impressive acceleration in the progress of elusive artificial intelligence capabilities, capabilities like image, speech and natural language recognition that promise to have a significant positive impact on our everyday lives. The emergence of deep learning also serves as a reminder that the technology and techniques for achieving the full promise of artificial intelligence are still taking shape.
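The distinction the narration draws, hand-written rules versus parameters learned from data, can be sketched in a few lines of Python. Everything below (the AND-gate task, the perceptron size, the learning rate) is an illustrative assumption of this sketch, not anything the speakers describe.

```python
# Toy contrast between the two approaches in the narration: a rules-based
# system encodes the decision by hand, while a machine learning model
# (here, a single perceptron) recovers the same decision from labeled data.

def rules_based_and(x1, x2):
    # Hand-written rule, the kind "evolved through human intervention".
    return 1 if (x1 == 1 and x2 == 1) else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    # Machine learning: the weights are learned from data, not written by hand.
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in samples:
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = y - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND gate
w, b = train_perceptron(samples)
learned = [1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0 for (x1, x2), _ in samples]
print(learned)  # [0, 0, 0, 1]: matches the hand-written rule
```

Deep learning, as the narration says, stacks many such learned units into layered networks; the learning-from-data principle is the same.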

Speaker 2

So hopefully that gave you some insight. Those are terms often used in the industry, but in my opinion often not well understood. So we're investing, through our technology and through acquisitions, to build and fuel artificial intelligence across everything we do. And as you look at this, what we call the virtuous cycle, what you see is that the things at the bottom are the things that are out there collecting the data. And as those things continue to collect data and push that data to the cloud, it's the artificial intelligence that's going to occur on both edges of this ecosystem that's important, allowing the things at the edge to use smaller sets of data to make more compact decisions, but decisions that are important and need to be made in more real time.

And those things in the cloud can work with a much richer and more volumetric set of data, but they can actually change how those things at the edge then behave and adjust to the environment around them. We're leading these compute transformations through innovation with our Xeon products, things like RealSense that allow those things at the edge to see and view the world with computer vision, and FPGAs. But to do that, data analytics is a critical component. That ecosystem is really meaningless without analytics applied to it. All of these things coming together then deliver an end-to-end AI solution.

The data, the connectivity and the analytics all provide what's really the AI environment. And we believe that only with Intel can you get the full range of compute solutions in a data center, from general-purpose to targeted silicon, to computer vision silicon, memory and storage, and communication assets. All of these things can come together to form an ecosystem that allows you to put in place a complete AI solution. Now, with all the innovation that's occurring in the assets Intel has, we really need to take the next step, which is: how do we plan to make AI pervasive? And that's really the next question I want to address here.
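The edge/cloud "virtuous cycle" described above can be sketched schematically. The class names, thresholds and sensor readings below are invented for illustration; real edge models and cloud-side training are far more involved than this stand-in.

```python
# Sketch of the virtuous cycle: edge devices make fast local decisions with
# a compact model, the cloud retrains on the pooled, richer data, and pushes
# updated parameters back out so edge behavior adjusts.

class EdgeDevice:
    def __init__(self, threshold):
        self.threshold = threshold   # compact model: a single cutoff
        self.buffer = []             # data collected for the cloud

    def decide(self, reading):
        # Real-time decision at the edge, using only the local model.
        self.buffer.append(reading)
        return "alert" if reading > self.threshold else "ok"

class Cloud:
    def retrain(self, all_readings):
        # Richer, volumetric view: recompute the cutoff from every device's
        # data (mean plus a fixed margin as a stand-in for real training).
        mean = sum(all_readings) / len(all_readings)
        return mean + 10.0

devices = [EdgeDevice(threshold=50.0) for _ in range(3)]
readings = [[40, 42, 61], [38, 39, 41], [55, 57, 58]]

for dev, rs in zip(devices, readings):
    for r in rs:
        dev.decide(r)

# Cloud aggregates everything the edge collected and updates every device.
pooled = [r for dev in devices for r in dev.buffer]
new_threshold = Cloud().retrain(pooled)
for dev in devices:
    dev.threshold = new_threshold   # the cycle closes: edge behavior adjusts

print(round(new_threshold, 1))  # 57.9 with these sample readings
```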

So as you saw in the video, and as all of us in this industry know, AI is really still in its infancy. The algorithms being used are changing every day. Data sets are becoming richer and more complex. The number of things at the edge continues to grow, and the diversity and complexity of those things collecting data is increasing every day, allowing the algorithms to change as well.

Now, Intel Xeon processors with GPUs are being used for deep learning, which is a small but rapidly growing subset, as you saw in that video, of machine learning. The GPU architecture, however, does not have a unique advantage for AI. It's not the only solution that's out there. And as AI evolves, highly scalable architectures are genuinely needed in order to scale with the data sets and the complexity of the problems being posed. So at Intel, we believe our architecture can solve for these larger models and offer consistency from the devices at the edge all the way through to the data center. Intel is committed to delivering the next generation of AI through products and experiences. To deliver our AI vision, we've done a whole range of internal innovations. And you see that in our Xeon products, our Xeon Phis, all the way down through our Atom-based products and our Quark-based products. All the way through that product set, you see innovation and you see AI intrinsic in all of them.

There are various types of learning engines in each one of those products. But in addition to that, we've had a series of acquisitions targeted at broadening and providing specific solutions for this AI environment. And the first one I want to talk about is Saffron. Saffron is a leading AI-as-a-service provider. They became part of the Intel team about a year ago, and they're providing AI solutions to many industries: the financial industry, the investment industry.

The next one is Movidius. Movidius is unique in that it's a pending acquisition as of today, but it is a leader in what's called computer vision. So think about it as AI applied to computers that are out looking at the world and understanding just what it is they're seeing, being able to discern between a person, a cup, a dog, anything. That AI is what Movidius is doing. And then finally, there's Nervana.

Nervana is really the top end here, a premier leader in deep learning and neural networks. Nervana joined us this year, and this is really designed to bring us to the very top of performance in deep learning and machine learning in this kind of environment. And since the acquisition, we've been very active in integrating and bringing Nervana in. And I thought that rather than just listening to me this morning, to tell us a little bit more, we'd bring Nervana CEO, Naveen Rao, on stage and have him talk to us a little bit about what Nervana does. So Naveen?

Thanks, Brian. Thanks. So first, welcome to Intel, right?

I mean, you've been here about a quarter or so, a little bit more?

Speaker 7

Yes, about 3 months now almost.

Speaker 2

Yes. So just welcome to AI Day. So maybe you could tell us just a little bit about what Nervana does and exactly what you've been trying to do at Nervana from a product perspective?

Speaker 7

Yeah. So understanding artificial intelligence has really been a quest of humanity for a long time. The combination of biological inspiration with computer engineering, with what we know how to build in silicon systems, has been a holy grail for computing. And I think we're actually at an inflection point where we understand enough about silicon, and enough about the concepts of the brain, that we can bring these things together and really take computers in a new direction.

Speaker 2

So that's something different about you as compared to a lot of the other data scientists out here: you talk a lot about the brain, about how the brain gets to the computer and how the computer becomes more like a brain. That's something you have a real passion about, right?

Speaker 7

Absolutely. So my background is, I spent 10 years in Silicon Valley designing chips and then quit my job to get a PhD in neuroscience, to try to get a hint at the answer to that question. And I think we've actually hit on something that's really interesting: we can take advantage of properties of silicon by using some inspiration from the brain, not at very low levels of mimicry but at a higher level, and actually move computer architecture into the next wave.

Speaker 2

So how are you guys doing? Tell us a little bit, specifically around deep learning. What's next from Nervana in deep learning?

Speaker 7

Well, we're going to be talking a lot more about that in the deep dive talk. Intel is committed and ready to invest at many different levels in the technology. It's going to be integrated across the entire portfolio. So today, we're going to be talking about our chip technology, which we'll be bringing to market next year, as well as integrations of this into various product lines. I think it's going to touch everything.

It's extremely exciting for me.

Speaker 2

And what you bring is both hardware and software, correct?

Speaker 7

That's right. So in addition to the hardware roadmap, we're going to be presenting our unified strategy across software, to bring together a lot of things that have been happening in the industry and make them work well across all of Intel's architectures.

Speaker 2

Okay. Thanks a lot. I think the audience now knows a little bit more about Nervana. I hope everybody attends your deep dive. And thanks a lot for coming on stage.

Speaker 7

Thanks, Brian.

Speaker 2

Thanks, Naveen. The goal of all these acquisitions is to make AI pervasive across all of our products and our software suite. And you could kind of hear that from Naveen as he described not only how he's going to bring silicon to market, but how he's going to work with us on bringing some of those same kinds of intellectual property into a lot of our products and software as well. And then let's go to the next step, which is how do we make AI accelerated?

How do we really push it across the boundaries and into almost everything we do? Let's talk about that for a little bit. One way we're accelerating AI is through our product portfolio, as we've just talked about. What I'm excited to announce today is that the Nervana brand, which will include both hardware and software, is becoming part of the Intel brand family. And it's built for speed and ease of use, with a foundation of highly optimized AI solutions that enable more data professionals to solve the world's biggest challenges faster.

So you'll see the Nervana brand now used extensively across our deep learning and machine learning solution sets. Intel is committed to leading the AI computing era, and we have a threefold approach to doing so. So let's walk through that. First, we want to drive the AI computing era. We're going to compress the innovation cycles through products like Nervana's, but across all of our products: Xeon, Xeon Phi.

You heard about Saffron. We have many more products, like I said, from Xeon all the way down through Quark, and we're compressing all of their innovation cycles to drive AI across the board. It's through breakthroughs in data ingestion, understanding how data can best enter the compute model, and building training and deployment models, really on the software side, that allow the compute to understand the decisions and the environment and make artificial intelligence come to life. Diane is going to provide you a lot more detail and examples on how we intend to drive this. She'll be up next, and she'll talk about this.

Second, we want to democratize access to AI. Intel is leading the charge for open data and data exchange initiatives, and for making tools easy to use across the board. And we're going to train more and more talent and really broaden the availability of people and skill sets in this area. Doug Fisher will provide you with a deeper dive around these tools a little later, during his keynote. And finally, we want to be the trusted guide for AI within the industry.

Intel has a long history of being the trusted supplier for compute and for data across the industry. And we want to maximize that, and maximize the net positive impact of artificial intelligence on our world. So we're engaging with governments, business and society to plan for this coming transformation. AI will transform most industries that we touch today. And we believe there's a chance to make that transformation positive, to provide insights and abilities to humans that they never had before.

And we want to be the trusted leader and developer of that. You'll hear from several other speakers throughout the day. And as I said at the beginning, Diane and the team have put together a really great day here, to help you not only understand what artificial intelligence is, but how it applies to our products and how you really think about the software side of it. The last thing I want to leave you with today is just one example of how we're looking at a problem holistically and solving it with artificial intelligence. It's no secret that the online world has brought society numerous benefits. I think we all think about those every day. But along with those benefits have come challenges. And probably many of you in this audience have been witness to, or sometimes a victim of, some form of harassment in this online world.

And if you have kids, especially teenage kids, you've probably witnessed it for sure at that level. At Intel, our goal is to power the smart, connected world. And if that world isn't inclusive, if it's not safe for people to go into, we'll never reach our full potential. The business will be stalled. And you can see hints of that today.

We know we need to bring the industry together to stop online harassment. So about a year ago, we brought together a team of partners to form a group called Hack Harassment. And this is an amazing opportunity to use technology, specifically artificial intelligence, as a tool to address harassment. Using the power of artificial intelligence, we're working together on building an open-source classifier that will detect and help deter online harassment. Think of a tool that, before you post, looks at your post and says: that could be harassment.

That could be harmful to the people who receive this. Are you sure you want to send it? Just that little comment and that understanding could prevent somebody from doing something they probably don't really want to do in the long run. So I invite you to take advantage of talking to our experts today, who are speaking here and can provide a deeper and more complete understanding of how Intel is advancing artificial intelligence. I'm really excited to be the one who gets to open up our first Artificial Intelligence Day.
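The kind of pre-post check described above can be sketched as a toy classifier. The word list, weights and threshold below are invented placeholders; a real harassment classifier of the kind the talk mentions would use learned models over far richer features, not a keyword table.

```python
# Hypothetical sketch of a "check before you post" flow: score the draft,
# and if it crosses a threshold, show a gentle warning instead of blocking.

HARASSMENT_TERMS = {"idiot": 2, "loser": 2, "stupid": 1, "shut up": 2}  # made up

def harassment_score(post):
    # Sum the weights of any flagged terms found in the draft.
    text = post.lower()
    return sum(weight for term, weight in HARASSMENT_TERMS.items() if term in text)

def pre_post_check(post, threshold=2):
    # Returns a warning string, or None if nothing is flagged.
    if harassment_score(post) >= threshold:
        return "That could be harassment. Are you sure you want to send it?"
    return None

print(pre_post_check("you absolute idiot"))   # warning fires
print(pre_post_check("great talk today!"))    # None: nothing flagged
```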

And I'm really excited about what artificial intelligence is going to bring to Intel, but really about what it's going to do for the industry. So to talk more about this and to really get me off the stage, I'm going to invite Diane Bryant, Executive Vice President of the Data Center Group, on stage to talk to you more. Thanks, Diane.

Speaker 9

Thank you. Okay. So welcome to all of you, and really, thank you for joining us. This is a big opportunity for us to engage with you and to tell you about our plans around artificial intelligence. So in preparing for today's talk, I came upon this quote from Intel's founder, Bob Noyce.

And he said that up until now, we've been trying to understand the brain by using computer modeling. Going forward, if we're going to better understand the future of the computer, perhaps we should start looking at the brain for some clues. Very consistent with what Naveen just said. That was 32 years ago. We have made tremendous progress in artificial intelligence over that time. However, much remains to be done.

The opportunities remain immense. And a very tangible example is the good old cat example that you all love. If you were to show a three-year-old child four or five pictures of a cat, on the sixth picture, they'll say cat. Now, contrast that with the most sophisticated image recognition processing system you have today: you need to analyze a million different images of a cat to get a 97% probability that the next cat will actually be identified as a cat. So we still have a very long way to go in getting to that elusive position where we can replicate the human brain.

However, despite that big gap, we have seen very large step functions in innovation before in our industry. And Intel is known for creating, leading and driving these big step functions in technology innovation. If you look back, we did it in the move from the old mainframe era to standards-based servers. We did it again in moving from standards-based servers to cloud computing. And now artificial intelligence is going to drive that next big wave of compute, which, as BK said, we believe will truly transform the way all businesses operate and how people fundamentally engage with the world.

So it is the fastest-growing workload in the data center. By 2020, there will be more servers running data analytics than any other workload. And we predict that the amount of compute cycles running AI will grow by 12x between now and then, just four years out. That's growing at twice the rate of the overall compute market. One of the biggest innovators in the area of AI, and clearly one of the largest consumers of compute capacity, is Google.

And we are very fortunate to have had a very long and fruitful technology relationship with Google. And I'm excited today to share the news of a new collaboration between Intel and Google that will help shape the next wave of data center innovation across the cloud and across AI. So please join me in welcoming a fabulous leader and friend, Diane Greene, Google Cloud's Senior Vice President. Hey, Diane. It's nice to see you.

We just like it because we're both named Diane. We just like to hang out together. So no, but truly, thank you. Thank you for coming.

Speaker 10

Oh, gosh, it's my pleasure. Thanks so much. And it was so great to announce that strategic partnership today. We've had this long-standing relationship between Google and Intel around the technology, accelerating things, and now we're bringing it to the business side as we join together in our focus on secure systems and open systems. And we both believe in multi-cloud.

Speaker 9

Yes, it is great. So we've got a formalization of a very strategic alliance that spans both technology and business. It spans the move to the cloud as well as TensorFlow and improvements in AI, really a comprehensive agreement. So thank you for that.

Speaker 10

Yes. I mean, I think that as we work together in the cloud, there's on-prem cloud, there are multiple public clouds, and we can work together to make it so that people can use them all very easily. Google, as an example, did Kubernetes, which we open sourced and which gives you orchestration across all these clouds. And we've enjoyed a collaboration with you guys to optimize it for the Intel architecture, improve virtual networking, prioritization of resource models, some real technologies there, and deliver these code optimizations. And then also, we've been working with you on security, of course, all-important to the enterprise, with deep collaboration around making Intel hardware and Google Cloud support a more secure world for us.

Speaker 9

Yes. And the direct optimizations we've been doing to make sure Kubernetes, the open-source solution that you obviously invented and have now released to the world, runs best on IA. It helps the broad community, but we truly do believe it's going to help enterprises deploy that next generation of developed-for-the-cloud solutions, like artificial intelligence. So it's a big opportunity and a nice collaboration with you guys.

Speaker 10

Thanks. And the other one we're collaborating on, completely in tune with your day today, is TensorFlow, which is Google's library for machine learning. Google Brain developed this library, and we've open sourced it. And now we're collaborating with you to provide accelerations on Intel platforms over a range of neural network models, so that it can work for both training and inferencing. And those optimizations should make it perform really well on the Intel architecture.

Thank you very much. And then finally, some integrations around IoT, the Internet of Things, to enable enterprise customers to be secure at the edge and deliver that.

Speaker 9

Yes. And there's the work we're doing together to make sure that TensorFlow really runs best on Intel architecture, so you really have a highly tuned machine. We've committed to releasing the Intel architecture optimizations to TensorFlow, and the team is committed to getting it done by the end of the year. But just looking at my watch, there's not a lot of time. They may be working through the holidays on this one. But no challenge is too great for the Google and Intel engineers.

I'm sure they'll get that done. If not by December 31, maybe January 1.
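What "optimized to run best on an architecture" means at the framework level can be sketched schematically: the same high-level operation is backed by interchangeable kernels, and a build tuned for a given CPU swaps in the fast one without changing user code. This is an illustrative pattern, not TensorFlow's actual dispatch mechanism.

```python
# Schematic kernel registry: user code calls KERNELS["matmul"], and a tuned
# build replaces the portable fallback with an architecture-optimized kernel.

def reference_matmul(a, b):
    # Straightforward triple loop: the portable fallback kernel.
    n, k, m = len(a), len(b), len(b[0])
    return [[sum(a[i][p] * b[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]

def tuned_matmul(a, b):
    # Stand-in for a vectorized kernel; it transposes b first so the inner
    # loop walks contiguous data, a classic cache-friendly layout trick.
    bt = list(map(list, zip(*b)))
    return [[sum(x * y for x, y in zip(row, col)) for col in bt] for row in a]

KERNELS = {"matmul": reference_matmul}

def enable_cpu_optimizations():
    # The swap a tuned build performs; user-facing code never changes.
    KERNELS["matmul"] = tuned_matmul

a, b = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
baseline = KERNELS["matmul"](a, b)
enable_cpu_optimizations()
optimized = KERNELS["matmul"](a, b)
print(baseline == optimized)  # True: same result, different kernel
```

The point of the pattern is exactly the guarantee in the conversation above: optimizations land underneath the library, so existing models benefit without being rewritten.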

Speaker 10

Well, thanks. I know how you're spending your holiday.

Speaker 9

I'll be talking to you.

Speaker 10

But we have had the two companies, Diane's company and Diane's company.

Speaker 9

There you go.

Speaker 10

No, but we've had this long-standing relationship, and it's helped accelerate bringing these new technologies to market much faster: the joint engineering, the joint validation efforts over the years, and looking forward, developing for the next generation. And as we expand our engineering, now I'm really happy to be going to market together and expanding our enterprise business relationship, bringing cloud to all companies.

Speaker 9

All companies on the enterprise. Yes, we're all cloud, and we're happy to help you. And it has been great working together. We obviously share a common vision around the democratization of technology through open. And it's a real pleasure to have you here.

Speaker 10

It's a pleasure to have the strategic alliance, and thank you so much.

Speaker 9

Yes.

Speaker 10

Thank you, Diane.

Speaker 9

So when we work with big industry partners and thought leaders like Google, they challenge us, and we are better for it. And in that same light, as we look at the area of artificial intelligence, the industry is still in a period of discovery, right? The pace of innovation in AI is massive. So for us to remain deeply connected and informed about all the new developments occurring, we are creating the Intel Nervana AI Board of Advisors. It's made up of industry-recognized academics and industry thought leaders.

And through the advisement and collaboration of these folks with the Intel Fellows and Intel Principal Engineers, the Board is going to shape our future R&D strategy across hardware and software, deeply committed to changing the nature of computation. And so if AI happens to be your domain of expertise, you likely know some of these folks, and we're certainly excited to be partnering with them. I personally, passionately believe that Intel is best positioned to support artificial intelligence and the revolution that comes with it. I guess you'd be surprised if I said I wasn't passionately a believer in that. But as just a starting point, a data point to start that discussion: our estimate is that in 2016, Intel processors will power more than 94% of all the servers deployed for artificial intelligence workloads, delivering the complete compute solution. No GPUs. 94%.

However, we know that if we're going to continue to lead and drive in a field like AI that is continually evolving, we need to match that very rapid pace of innovation that we all see happening in the industry. It means all implementations, all techniques, all algorithms, from deep learning and the various forms of neural networking that Naveen was speaking of to the many associative and statistical algorithms, everything. All methods of AI will run best on Intel architecture. AI on IA, as we say. So there are clearly many different approaches to artificial intelligence, but what they all have in common is scale. The more data you can compute, the more accurate the model will be.

The more compute capacity you deliver to the solution, the faster the model is going to train and the more real-time results you're going to get from it. So we provide a full portfolio of products that span all artificial intelligence implementations. What we deliver is one architecture, from training to scoring, from development to deployment, from general-purpose to highly targeted algorithms. So let's start with the Xeon family. The Xeon E5 is the most widely deployed processor for artificial intelligence and machine learning.

With each generation of Xeon, we have been optimizing Apache Spark to run best on IA. Apache Spark is the most commonly used machine learning platform today. Over the last three years, we've delivered a staggering 18x improvement in performance. Beyond Xeon, we enable customers to further optimize their AI solutions with the integration of an FPGA, a field-programmable gate array, with the Xeon processor. The addition of the FPGA allows for very low latency, which is critically valuable in real-time inference solutions.

And it also provides some flexibility in the precision that is used. So you get a highly efficient compute environment. And I am very happy to announce today that we are shipping Skylake, our next-generation Xeon processor. It is a preliminary production version of Skylake.

It has a targeted feature set, aimed at the largest cloud service providers for their use, as well as some high-performance computing end users. And with Skylake, we just continue that beat rate of innovation. One of the features included in Skylake that is targeted at artificial intelligence is the new instruction set feature AVX-512, Advanced Vector Extensions. It's a new set of instructions, and that floating point capability is going to allow a significant acceleration for inference. So all the OEMs today are currently developing full-featured, high-performing systems across the Skylake product line, delivering solutions for the broad range of customer segments and workloads, and they'll be ready when Skylake formally launches as a full portfolio in the middle of next year.

Next in the AI portfolio is Xeon Phi. Scalability is key for machine learning, and Xeon Phi is inherently a scalable architecture. We recently launched our second generation of Xeon Phi, which is Knights Landing. It delivers a 31x reduction in time to train when scaling to 32 nodes. So it's very impressive, near-linear performance scaling.

Xeon Phi delivers a 50% increase in deep learning training over the standard Xeon processor family, so a significant pop in performance. It also provides direct access to 400 gigabytes of memory. So, a huge memory space, which is incredibly important in AI solutions, where the size of the data set really matters. If you contrast that 400 gigabytes of memory, the best-in-class GPU today is limited to 16 gigabytes, so a dramatic difference. Xeon Phi is also a bootable processor, so you don't have to deal with the constraints and complexities that come with an offload architecture like a GPU.
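The scaling claim rests on data-parallel training: each node computes gradients on its own shard of the data, and the averaged gradient equals the gradient over the full set, so nodes can work in parallel. A toy sketch, with an invented one-parameter linear model and step size:

```python
# Data-parallel training in miniature: shard the data across "nodes",
# compute per-shard gradients (in parallel on real hardware), average them,
# and take one shared update step.

def gradient(w, shard):
    # d/dw of mean squared error for the model y ~ w * x, over one shard.
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

data = [(x, 3.0 * x) for x in range(1, 33)]  # ground truth slope: w = 3
nodes = 4
shards = [data[i::nodes] for i in range(nodes)]  # equal-size shards

w = 0.0
for _ in range(200):
    # Each node works on its shard independently; gradients are then averaged,
    # which (for equal shards) exactly reproduces the full-data gradient.
    grads = [gradient(w, shard) for shard in shards]
    w -= 0.001 * (sum(grads) / nodes)

print(round(w, 2))  # 3.0: the distributed updates recover the true slope
```

The near-linear speedups quoted above come from doing the per-shard gradient work concurrently; the averaging step is the communication cost that real systems work hard to minimize.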

And then after that, we come to our next generation of Xeon Phi, which is Knights Mill. I actually announced this product at the Intel Developer Forum a couple of months back. Knights Mill is going to be further optimized for deep learning training. With that processor, we deliver single-precision as well as half-precision support, and that's going to deliver a 4x improvement in deep learning training. So, amazing results, and we look forward to launching this product next year.
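The single- versus half-precision tradeoff behind that claim can be illustrated with the standard library: half precision stores a weight in 2 bytes instead of 4, so more weights fit per memory transfer and per vector operation, at the cost of rounding error. The sample weights below are arbitrary.

```python
# Round-trip a few weights through IEEE 754 half precision and measure the
# rounding error introduced, using only the struct module.
import struct

def to_half_and_back(x):
    # Pack as half precision ('e'), then unpack back to a Python float.
    return struct.unpack('<e', struct.pack('<e', x))[0]

weights = [0.1234567, -2.5001, 7.25, 0.000512]
halved = [to_half_and_back(w) for w in weights]

for w, h in zip(weights, halved):
    print(f"{w:>12.7f} -> {h:>12.7f}  (error {abs(w - h):.2e})")

# 2 bytes per half-precision value versus 4 for single precision, which is
# why reduced precision doubles effective memory and vector throughput.
print(struct.calcsize('<e'), struct.calcsize('<f'))  # 2 4
```

Deep learning training tolerates this rounding remarkably well in practice, which is why half-precision support is paired with a training speedup in the talk.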

Given also that AI solutions are increasingly running on high-performance computing clusters, the scale value proposition, we're very proud of the Top500 supercomputing results that just came out this week. Xeon Phi comprised over 80% of all the new system accelerator flops. So if you look at all the coprocessor and GPU flops, we were over 80% of those new flops. We're making tremendous, tremendous gains with the Xeon Phi product line. If you look specifically at deep learning, it is still a small but emerging portion of the overall AI world. To give you some sense of the size of the market, 0.1% of all servers in 2015 were dedicated to deep learning training. So, a very small number.

But we all know it is a space of immense investment and innovation. It is a space that is rapidly growing, and Intel is committed to delivering the significant advancements that the industry needs. Before the end of the decade, the Intel Nervana platform will deliver a 100x improvement in deep learning training. 100x. That's our commitment by 2020.

And I'm looking at Naveen: that's your commitment by 2020. He's acting like he's just hearing this for the first time. Our first step towards this is to actually launch the new product that we've code-named Lake Crest. Lake Crest is based on that great technology we acquired from Nervana. First silicon of Lake Crest comes in the first half of 2017, so just around the corner. Lake Crest will deliver best-in-class neural network performance at launch. And I'm also happy to announce that, in addition, we have added Knights Crest to our roadmap. Knights Crest will create a very tightly integrated solution between our best-in-class Xeon processor and our best-in-class deep learning engine. Those 2 come together into an integrated solution.

Aside from silicon, as BK noted, we have other AI assets, a critical one being Intel's Saffron technology. We acquired the Saffron company last year, and its CEO, Gayle Sheppard, continues to lead Saffron. She's right here, so if you want to chat with her afterwards, you'll know who she is.

Saffron's software platform enables companies to identify critical insights in a very timely, effective and efficient manner. And it does that by employing a memory-based reasoning system. A great example of the impact that artificial intelligence and Saffron have had on a particular business result is USAA. USAA, if you don't know, is a financial institution. They provide insurance, banking, investments, retirement products and advice.

They are owned by their nearly 12,000,000 members, and those members are exclusively current and former U.S. military and their families. Running data science research and development at USAA is Robert Welborn. His areas of focus are machine learning and high-velocity data problems.

Please join me in introducing Robert to the stage. Robert? Thank you for coming.

Speaker 12

Thanks, Diane. Thank you for having us.

Speaker 9

Great to see you. Yes. So as a financial services firm, you're obviously swimming in data. It's a very data rich environment.

Speaker 12

Very much so.

Speaker 9

So why don't you jump right in and tell us about some of the AI implementations you've done, obviously along with Saffron. What was the problem you were trying to solve?

Speaker 12

Sure. So at USAA, we are obviously very focused on ensuring the safety of our members' assets and, since our members own our company, on spending their money very wisely and not wasting any of it. And so we spend a lot of time trying to understand and detect fraud. If you think about it, in the United States in the last 12 months, auto insurance companies have paid out nearly $1 trillion in auto insurance claims. Industry estimates put 10% to 20% of that $1 trillion as fraudulent claims.

And so USAA pays $1 billion every year in claims to completely valid members, and we do our best to make sure that we don't spend a penny on fraud; however, no system is perfect. So we first made the investment in Saffron with the intent of improving our rigorous fraud investigations, looking at their memory-based reasoning system to augment our current workflow and drive complexity out of the system, as opposed to adding additional business rules and models. Within 10 hours of setting the first instance up, we found our first fraud ring. This ring was bilking our members out of $2,000,000 every year.

And so we were extremely pleased to be stopping that.

Speaker 9

Wow, that's a quick result.

Speaker 12

It was.

Speaker 12

10 hours was good. And the thing is, we would never have found that had it not been for Saffron's unique holistic graph, which essentially looks at the associations between extremely heterogeneous data sets.
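
As a toy illustration of the association-mining idea behind such a system (a hypothetical sketch, not Saffron's actual algorithm; the claim data and attribute names are invented), one can index which attribute values recur across claims and flag values that tie many otherwise unrelated claims together:

```python
from collections import defaultdict

# Hypothetical fraud-ring detection: find attribute values shared by
# an unusually large number of claims.
def find_shared_attributes(claims, min_claims=3):
    seen = defaultdict(set)               # attribute value -> claim ids
    for claim_id, attributes in claims.items():
        for value in attributes:
            seen[value].add(claim_id)
    # Values linking many claims are candidate hubs of a fraud ring.
    return {v: ids for v, ids in seen.items() if len(ids) >= min_claims}

claims = {
    "c1": {"phone:555-0100", "addr:12 Oak St", "shop:AutoFixCo"},
    "c2": {"phone:555-0199", "addr:9 Elm Ave", "shop:AutoFixCo"},
    "c3": {"phone:555-0142", "addr:4 Pine Rd", "shop:AutoFixCo"},
    "c4": {"phone:555-0100", "addr:12 Oak St", "shop:QuickBody"},
}
print(find_shared_attributes(claims))  # the repair shop links c1, c2, c3
```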

Speaker 9

It is very impressive. And an attribute of Saffron is that you're continually getting data in real time. So that data is continually being refreshed, which is critically important when you're in the process of continually trying to outsmart the malicious hackers, right? With that real-time data and refreshing of data, your model never ages out. It's a great attribute of Saffron.

So congratulations on your fabulous results. I heard you also leverage the technology not just for fraud detection, but also for customer service and customer experience?

Speaker 12

Absolutely. So at USAA, we're consistently member service champs. We turned to Saffron to understand why members are calling us, logging on to the website or connecting to the mobile app. And just to give you an idea, we have 160 products; we have a lot of financial services products.

And we offer 8,000 distinct services that are attached to that. So that's like 160 factorial times 8,000 factorial.

Speaker 9

That's a lot.

Speaker 12

Yeah. It's more than any possible logistic regression is ever going to be able to do. To give you an idea, if I hired 1,000 developers right now and they started building a model every second, we would outlast the life of the universe by about 3x. So rather than waiting for a second universe to cool

Speaker 9

You thought you'd use Saffron.

Speaker 12

Yeah. So in about 10 weeks, we got a first set of results, which was much better than seeing whether or not humanity evolved again. And, you know, with our very first iteration of models, which is actually still impressive, we were hitting about a 50% rate of guessing what people were going to do next. But Saffron improved the rate to 88%.

Speaker 9

Wow.

Speaker 12

And so it was a 70% improvement in 10 weeks.

Speaker 9

Wow, that's pretty impressive.

Speaker 12

So essentially, we've used Saffron so far to automate some of the current analysis that we're already doing and considerably step it up, as well as to take on some of the superhuman analysis that mere mortals could not possibly do.

Speaker 9

That's impressive. So fraud detection, that box is checked, and customer satisfaction. What's next? What are you looking to do in the AI space next?

Speaker 12

So we're looking to tackle cybersecurity and phishing next. Not fishing. No trout.

Speaker 9

With a ph, no trout. Okay. Well, thank you so much for joining us. It's very impressive. Appreciate it, Robert.

As BK talked about in the Hack Harassment space, AI also plays a critical role in delivering societal benefit, and we are clearly passionate about that. I'm personally passionate, and fortunate to be working in a space where I can actually have impact in an area of societal benefit: the marrying of technology and the health care industry. Many of you have heard before, in various talks that we've done, about the investments we're making with the Michael J. Fox Foundation and the use of patient analytics for better understanding of Parkinson's disease.

We also have our mission around the Collaborative Cancer Cloud, looking to deliver personalized cancer treatment through distributed analytics of genome sequences, working with OHSU and other partners. We're also working with the National Cancer Institute and the Department of Energy to apply high-performance computing to drug discovery. And we're working with Penn Medicine on machine learning solutions that identify patients at high risk for hospital readmission. I am also happy today to be announcing a new partnership with the Broad Institute of MIT and Harvard. Broad, if you didn't know, holds the world's largest collection of full genome sequences.

And our objective together is to optimize the hardware and software solution for genome analytics. The goal here is clearly to create new solutions that will promote biomedical discoveries. So let's hear from Anthony, the Chief Data Officer of Broad.

Speaker 14

At the Broad Institute of MIT and Harvard, we use a collaborative approach to biomedical and genomic research in order to improve human health and advance the understanding and treatment of human disease. Every 8 months, the size of genomic data sets doubles. This makes scalability a big problem. The data is often siloed by institution or infrastructure. This makes sharing very challenging.

So when you look at the next decade or so of both genomics and healthcare at large, we're going to see that the great breakthroughs are going to come from applying artificial intelligence to problems in health care. I am truly thrilled to announce today the creation of a new Intel Broad Center for Genomic Data Engineering. This represents a 5 year investment towards developing tools for processing genomic data, along with reference architectures for deploying them across multiple institutions. We hope to create a model for other industries to break down barriers and speed up research and discovery that relies on complex and distributed data sets. Broad and Intel have already done amazing work together.

For example, in a recent project, we reduced the time it took to genotype a 20,000-sample cohort to less than one-tenth of the time it took previously. Our continued work together has the potential to help researchers everywhere drive new insights that ultimately can improve human health. I want to sincerely thank Intel for its partnership in support of this groundbreaking work. We look forward to a deep and productive collaboration in the years ahead.

Speaker 9

Not often discussed is another area that's critical for the application of technology, and in particular artificial intelligence: the trafficking of children, a serious issue facing our country. In 2015 alone, over 460,000 children were reported missing in the U.S. The National Center For Missing and Exploited Children, or NCMEC, was established by Congress to address this horrific reality. And with me to talk about our work together to apply artificial intelligence to this space is the CEO of NCMEC, John Clark.

John? Hi. Thank you for coming. I really appreciate you being here with us. So why don't you tell us a little bit about NCMEC?

Speaker 15

Thank you, Diane. Good afternoon, everyone. It's a pleasure and an honor to be here. I was listening in the wings to all the great things that Intel is doing to make the world a better place with technology. The National Center For Missing and Exploited Children is a small non-profit organization based in Alexandria, Virginia.

Every day, about 330 dedicated souls come to work with a single mission: to find missing children and stop child exploitation. The organization was established in 1984 by John and Reve Walsh, our co-founders, John Walsh of America's Most Wanted fame, and it was really designed as a clearinghouse for families, victims and other folks affected by the issues of missing children and child exploitation. Next month will actually mark my first year on the job as CEO, and I'm very proud of that. I had the good fortune of working with another historic and legendary organization, the U. S.

Marshals Service, prior to coming to this organization. And through that relationship, I watched the center really develop and grow into this great single mission here, finding children and stopping exploitation. So today, we've recovered over 230,000 missing children. We're very proud of that. And in fact, more children are coming home today than ever before.

So again, hard work and dedication, but that's what gets it done.

Speaker 9

It is remarkable. And technology has played a substantial role: both on the negative side, unfortunately, making it easier to exploit children, and, of course, on the positive side, being used to help rescue them. So maybe you could tell us a little bit about how technology has come into play and what you're seeing?

Speaker 15

Sure, sure. Well, threats against children have evolved over the years, and the Internet has paved the way for the explosion of what we're seeing now in child pornography and online exploitation, somewhat benign terms, really, for actual violent sexual exploitation of children. Again, hard to believe, but that happens. So in response, the National Center For Missing and Exploited Children created a cyber tip line, which began in 1998, so people would have a place to actually report these types of things. And in a recent FBI operation conducted just a couple of months ago, they recovered about 82 children involved in online sex trafficking, the youngest child being just 13 years old.

So in 2013, the volume of our tip line increased to 500,000 reports. And just about a month ago, we hit our 7,000,000-report mark. So that's a significant increase in volume on the cyber tip line. In order for the National Center to respond faster and be better prepared, we obviously need to leverage technology. And that's one of the great things we're doing with Intel.

Speaker 9

Yes, that is a dramatic uptick in reports. And certainly, the cloud service providers have become large supporters, as they are voluntarily taking action to ensure that they are not hosting and promoting child pornography. So that's a big driver of some of that uptick. What is your biggest challenge, then, with all these new tips coming in?

Speaker 15

Well, despite advances in technology, we still, rather sadly in our modern era, have a lot of stovepipes and a lot of manual analysis of the material we're getting through the cyber tip line. So it's still kind of a cumbersome process. Analysts have to personally review all the data, and you can just imagine the time that volume would take. So artificial intelligence, and specifically Intel's innovation in this particular work area, is key to addressing this issue of moving from a manual system to one enabled by technology. And that's going to be ongoing over the course of time.

But by leveraging this artificial intelligence, our goal is to more quickly and efficiently process these reports. An interesting thing that Intel has helped us do, or will be helping us do, is to decrease the amount of time it takes to go through that kind of volume, which right now in some cases may take up to 30 days. With Intel's help, in many of those cases we'll be able to get it down to a day or 2. So just imagine that dramatic change.

Speaker 9

Yes, amazing response. So our commitment through our partnership is that we will achieve this reduction from tip to action, from a tip being reported to action being taken, whatever that action is, from 30 days down to 1 day, and we will do that in less than a year. That's our commitment to you. And we're happy to be working on that very important project. So why don't you tell us what additional opportunities we should be working on, or what more can be done?

What are you looking at?

Speaker 15

Well, today about 94% of the reports we get on the cyber tip line originate outside the United States. And we think that with artificial intelligence, we can enhance our global response, being able to triage those types of reports, particularly issues involving child pornography and child sex trafficking. Artificial intelligence is going to help us find what we might call the needles in the pile of needles, and process the data we're always looking at up to 20 times faster, we believe, to help us protect children even better. The National Center is also inspired by the work being done by others in this field, but Intel is certainly taking the lead.

Speaker 9

Yes. So thank you so much, John. Thank you for the partnership. We certainly realize what an important investment this is and what a great application of artificial intelligence, and we're thrilled to be working with you. Thank you.

Speaker 15

It's a great honor, and thank you as well for all the help that Intel is doing.

Speaker 9

Thank you. So realizing the potential of artificial intelligence, I hope you now clearly agree, is a question of when, not if. And we are investing here at Intel to deliver breakthrough performance, to deliver societal benefits, and to democratize access. We are also investing to make you all look great. So when you leave today, don't forget to pick up your hoodie, a very cool hoodie.

But specifically now to talk more about the efforts we are making in the democratization of technology and making AI solutions more available, let me introduce to you Doug Fisher, Senior Vice President of Software and Solutions Group.

Speaker 5

Great. Thank you. Here you go.

Speaker 16

Good afternoon. Being a software guy, I should be wearing that hoodie up here. So today, I'm going to talk to you a lot about what we're doing to democratize artificial intelligence. And as I was thinking about what I was going to speak about, a theme came shining through, and it's called upstreaming. What we're doing in artificial intelligence and modernization is upstreaming the work we do.

How many people know what upstreaming is and have a definition? How many people are from a big city, like New York, L.A. or San Francisco? Yeah, it's not that kind of upstreaming. It's not the upstreaming where you're standing on a curb and there are 3 people waiting for the same cab, so you go up one block to get the cab before they do. That's not the upstreaming I'm talking about.

What I'm talking about is getting closer to the source, engaging a lot earlier. This sounds a lot like the work we've done for years in open source software, and we're applying that same technique in artificial intelligence: moving upstream. When you move upstream, you concentrate the effort, you accelerate the innovation, and then the rest of the world benefits, making it pervasive, or, as I say, democratizing it, making artificial intelligence available for more people to innovate with and build solutions like Diane just spoke about. That's what we're doing. And in order to do that, we have to unleash the value.

Diane talked a lot about the great hardware platforms we're building. We need to unleash that value. We need to make sure that whatever we build, whatever solutions are being built, the value comes through. We also need to harness that knowledge. We need to make certain that the next generation of innovators have that knowledge to scale those solutions to the broader market.

And so in order to unleash the full potential of artificial intelligence, we are unleashing the value of IA. That is what we're doing. So let me talk about that. Now, Diane showed the roadmap of amazing hardware platforms, from Xeon and Xeon Phi to the new Crest line she talked about, to our FPGAs: marvelous hardware platforms purposely designed for specific AI workloads. We need to build capabilities on top of that.

She talked about how prevalent we are in machine learning, and we have libraries of primitives that we build in software to take full advantage of that hardware. These primitives accelerate the capabilities of our platform, and building on these primitives accelerates the algorithms on top of them. We're extending our Math Kernel Library to add deep neural network capabilities, to accelerate those primitives and to ensure that whatever is built on top takes full advantage of whatever Intel platform is below. This is the value we can add to accelerate the performance you so need to solve all the problems just discussed.
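
The gap such primitives close can be illustrated with a toy comparison (a sketch in plain NumPy, not MKL itself): the same matrix multiply written as interpreted loops versus a single call into a tuned BLAS-style primitive.

```python
import time
import numpy as np

# Naive triple loop: what the math looks like without an optimized primitive.
def naive_matmul(a, b):
    n, k = a.shape
    _, m = b.shape
    out = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            for p in range(k):
                out[i, j] += a[i, p] * b[p, j]
    return out

rng = np.random.default_rng(0)
a = rng.standard_normal((64, 64))
b = rng.standard_normal((64, 64))

t0 = time.perf_counter(); slow = naive_matmul(a, b); t_naive = time.perf_counter() - t0
# NumPy's `@` dispatches to a tuned, vectorized library routine.
t0 = time.perf_counter(); fast = a @ b; t_blas = time.perf_counter() - t0

print(np.allclose(slow, fast))  # same answer
print(t_naive > t_blas)         # the tuned primitive is far faster
```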

On top of that will be the deep learning frameworks. Those deep learning frameworks need to be optimized and accelerated to take full advantage of those deep neural network primitives. And so we'll engage with the frameworks most prevalent and important to our industry to ensure they're fully optimized, so that when a data scientist engages with that framework, utilizing Intel architecture and Intel platforms, they're getting the most value. We're unleashing that value, and they're getting the most benefit out of the platform. So that's why we engage in all these frameworks.

Let me talk about a few we engage with. The first one is Neon, from Nervana. We're going to engage to ensure that we optimize that framework for our platform, ensuring its performance is second to none. And we're using it as a space where we can drive more and more innovation for the market. An example of that is our Intel Nervana Graph compiler. If you're a data scientist, going to deeper and deeper technologies with more and more layers, and hand-tuning all of that, is very difficult.

This graph compiler helps simplify building more and more complex models, with deeper and deeper layers, so data scientists can build more sophisticated models. So we're innovating in this space with this graph compiler, and we're going to open-source it. It's going to be released at the start of 2017 with Neon's Release 3.0. But it will also be open source for other frameworks to take advantage of. So we're going to accelerate technologies in Neon and allow the rest of the world to benefit as well, in all the other frameworks.
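
In spirit, a graph compiler operates on a program expressed as a data structure rather than as eager computations, so the whole graph can be analyzed and rewritten (for example, fusing operations) before it runs on a target. A minimal, hypothetical sketch of that idea (not the actual Intel Nervana Graph API):

```python
# A tiny computation-graph representation with a trivial interpreter.
class Node:
    def __init__(self, op, inputs=(), value=None):
        self.op, self.inputs, self.value = op, inputs, value

def const(v):
    return Node("const", value=v)

def add(a, b):
    return Node("add", (a, b))

def mul(a, b):
    return Node("mul", (a, b))

def evaluate(node):
    # A trivial "backend": walk the graph and execute each op.
    # A real compiler would transform the graph before this step.
    if node.op == "const":
        return node.value
    args = [evaluate(i) for i in node.inputs]
    return {"add": lambda x, y: x + y,
            "mul": lambda x, y: x * y}[node.op](*args)

# y = (2 * 3) + 4, built as a graph rather than executed eagerly.
y = add(mul(const(2), const(3)), const(4))
print(evaluate(y))  # 10
```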

We're going to work with those other frameworks to ensure we take those technologies and infuse them into multiple frameworks. TensorFlow: well, you heard both Dianes talk about TensorFlow and the work we're doing there to drive performance optimization in TensorFlow. A key pillar of the collaboration alliance we just talked about on stage a moment ago is that optimization, which I committed to working over the Christmas holiday to get done. So it's always good to go first. But we're going to ensure that we deliver all those optimizations at the end of the year and then deliver this to the masses to take advantage of in 2017.

We're tuning and optimizing on our Xeon and Xeon Phi platforms. The early work that they talked about is already showing the fruits of our labors: we're already seeing tremendous performance improvements in engaging with Google and optimizing those frameworks, taking advantage of the software layers I described. Another framework is Theano, and Theano we're optimizing very similarly to the other frameworks.

A specific example is our engagement with Kyoto University in Japan. They're looking for a way to use machine learning and deep learning to determine which drugs will be most effective for solving challenges they have in their society, analyzing the data to predict which drugs and combinations of drugs will be most effective. By engaging with them, we were able to improve their performance, through our Xeon platform and the optimizations we've done in the framework, by 8x, increasing the accuracy of their model to 98.1%. That's an amazing result: taking advantage of the capabilities we have in our platform and the optimizations we have in our software, and delivering value in their model. Caffe is another example, where we worked with a company in China called The Cloud.

The Cloud is the largest online video provider in China, and their job is to consistently look for illegal video. They take pictures from the video and run them against a model; using deep learning, they analyze the pictures to determine if the video being displayed is illegal, and then they pull it off. They were having trouble meeting the demands of all the videos; as you know, the proliferation of video is growing dramatically.

And so we went in, and using our Xeon platform, we were able to increase performance by 30x when we optimized the Caffe framework for our platform. So this is the kind of work we're doing to unleash the value of Intel architecture: engaging with these frameworks and really delivering value and performance on our platform. The final one I want to talk about is Torch. Torch is another framework, where we engaged with a company that started in 2015. I met this company at IDF.

The company is called Picazo. They're doing a very innovative thing with their technology, and we started engaging with them. I played with the application that they developed, and I learned what was behind it. It was so fascinating to me.

I wanted you all to hear about what they're doing. So instead of me talking about it, I thought I'd invite the CEO of Picazo up on stage to talk about what he's doing, what his application is doing and how he used Intel to deliver a better experience. So let me invite up on stage Noah Rosenberg, CEO of Picazo. Hey, Noah, how are you doing?

So we talked earlier, and we talked yesterday about this: I met you at IDF. I'm very fascinated with what you're doing. It does an amazing thing, which turns individuals like me into artists with a couple of clicks. So why don't you talk about what you're doing?

Speaker 18

Thanks. I think that's a great way of describing it. Picazo is an app that lets anybody create art with just 2 taps. And maybe instead of talking about it, I thought we could show it. Doug, do you mind if we use the app on you?

Speaker 16

All right.

Speaker 18

So all we're going to do is take a quick photo. All right. Now, we ship Picazo with hundreds of styles, but the coolest thing is that you can bring in any art. And I've been staring at this background all day, so I thought, wouldn't it be cool if we could paint you with this background? So I'm just going to take a photo of that.

And that's what's so cool about Picazo: it lets you become an artist by combining art that you see in the world around you.

Speaker 16

And so now you're actually compiling that. So here are some examples of what you've done before. Why don't you talk a little bit more about what's behind what we see on the screen here?

Speaker 18

Yes. So in this example, you can see how we've given Picazo an image of the Golden Gate Bridge. Our neural network, powered by AI and running on the Torch open source library, is going to study that image, and it can separate out and discover which area is the bridge, which area is the water, and which are the clouds. It does that by modeling the visual cortex, so it's really looking at the image the same way that you or I would.

And then we ask it to look at the style. So in this case it's a Picasso painting. But when we're looking at the style, we say, where are the brushstrokes? Where are the colors? And while the neural network is sort of imagining or detecting those two images, it's able to mesh them.

So we ask it: just make these pictures kind of similar. Now take a look, is that better? And each time, the neural network re-detects it and goes through and says, okay, this is a little bit more like both. It processes that image over thousands of iterations. It takes a ton of computation.

In the end you get this beautiful artwork.
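
What Noah describes is neural style transfer, whose core can be sketched as a loss with two terms: a content term that keeps the scene, and a style term that matches Gram matrices of feature maps, which capture which channels fire together (brushstrokes and palette). The sketch below is illustrative only: random arrays stand in for convolutional features, and the weights alpha and beta are arbitrary.

```python
import numpy as np

# Sketch of the two-part loss behind neural style transfer.
def gram_matrix(features):
    # features: (channels, pixels). The Gram matrix records which
    # channels co-activate, a proxy for brushstrokes and palette.
    return features @ features.T / features.shape[1]

def style_transfer_loss(gen, content, style, alpha=1.0, beta=100.0):
    content_loss = np.mean((gen - content) ** 2)   # keep the scene
    style_loss = np.mean((gram_matrix(gen) - gram_matrix(style)) ** 2)
    return alpha * content_loss + beta * style_loss

rng = np.random.default_rng(0)
content = rng.standard_normal((8, 64))  # e.g. bridge-photo features
style = rng.standard_normal((8, 64))    # e.g. painting features
# Thousands of iterations nudge a generated image to lower this loss.
print(style_transfer_loss(content, content, style) > 0)
```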

Speaker 16

So you engaged with Intel. You already had a solution before that. Why did you engage with Intel? What was the challenge you were facing?

Speaker 18

Yes. When we first got started, we launched exactly a year ago today. At the beginning, it took us an hour to render a 1 megapixel image, because this is just such a new idea and it required so much computation. But we thought, gosh, we can't wait to let people become artists. So we released it into the world, and we just let you do a little thumbnail image. And people went crazy.

They got addicted to this thing and they were running it all day long. But our challenge became that people kept saying, how do I get my art out of the phone? I want it on my walls. For this to really be art, it needs to get out into the world. So we were looking at the problem, and we said, we're running on these GPGPU systems, and the challenge there is that even our little thumbnail images required 16 gigabytes of RAM.

So we were really struggling with how we were going to get this bigger. We knew we could do it on the CPU if we could get our Torch library optimized, and that's where Intel came in. Your guys came in and worked with our engineers to optimize our code with the MKL library so that the Intel platform could process these images. And we went from a 1 megapixel image in an hour to a 12 megapixel image in just 5 minutes. And anybody can do that right from their phone on our free app.

Speaker 16

That's fantastic. Maybe we have some other examples of work you did while you were here. So you're welcome to have your own image, Diane. And BK, if you're here. Really, what you had is a problem that ran into exactly the limitation Diane talked about, that 16 gigabytes. You needed 2 terabytes of data, and you went from an hour for a small photo to actual canvases in minutes.

Speaker 18

Absolutely. And we're excited today to be able to announce that we can now go to unlimited resolution. So we've got an image that's 16K by 16K pixels.

Speaker 16

So how did I come out? Let's take a look.

Speaker 18

Blank.

Speaker 16

All right. So the screen's not working.

Speaker 5

Yeah.

Speaker 16

You get the idea. I'm up there. Okay. If you go back, you can see what he did combining my photo with this image.

There it goes. So very cool stuff he's doing. What's really interesting is the computation behind that and the science behind building that algorithm and utilizing our performance capabilities.

Speaker 16

So thank you so much.

Speaker 18

Thank you so much. It's such a pleasure to be here.

Speaker 16

In addition to the frameworks I've talked about, Diane also talked about the importance of all the data, the massive data sets that are coming at us, and our need to continue to optimize Hadoop and Spark for in-memory computation. She talked about the 18x performance improvement we've already shown by optimizing Spark. We're continuing to do things in Spark: we're announcing the availability, at the end of 2016, of BigDL, which integrates deep learning techniques right into Spark, so the same developers who are utilizing Spark now have deep learning capabilities built in and can take advantage of the environment they're already in. You heard earlier that this is an evolving space.
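
The pattern a library like BigDL brings to Spark can be sketched without Spark at all (a hedged illustration of data-parallel training, not BigDL's API): each partition computes a gradient on its shard of the data, and the partial gradients are averaged into a single update, mirroring Spark's map and reduce steps.

```python
import numpy as np

# Gradient of mean squared error for a linear model y = x . w,
# computed on one shard of the data.
def local_gradient(w, shard_x, shard_y):
    pred = shard_x @ w
    return 2 * shard_x.T @ (pred - shard_y) / len(shard_y)

rng = np.random.default_rng(0)
x = rng.standard_normal((100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = x @ true_w

w = np.zeros(3)
partitions = [(x[i:i + 25], y[i:i + 25]) for i in range(0, 100, 25)]
for _ in range(200):
    # "map": one gradient per partition; "reduce": average them.
    grads = [local_gradient(w, sx, sy) for sx, sy in partitions]
    w -= 0.1 * np.mean(grads, axis=0)

print(np.round(w, 2))  # recovers true_w = [1, -2, 0.5]
```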

All sorts of new innovations are happening; that's an example of what's happening today around this space. Now, once a data scientist has decided which framework they're going to use, we want to make sure we provide them that framework on our optimized libraries, the optimized MKL-DNN, and our amazing hardware platforms, and then get out of the way and let them do what they do best, and that's build these models. And so what we've built is our Intel Deep Learning SDK, which I talked about at IDF.

It will be in beta next month and released in full production at the start of next year. What it does is allow data scientists to deploy their environment, helping them build models and accelerating their ability to do what they do best: build and train models. It then takes that trained model and applies optimization techniques, as well as weight quantization, to compress it so it can be deployed for inference on our broad set of platforms. This tool chain goes end to end across all of the architectures you heard Diane talk about today, giving data scientists the tools they so need to accelerate their ability to innovate. And that's what we're doing to unleash the value: upstreaming all this knowledge, upstreaming the work we're doing in frameworks, all open source, working upstream at the source of these investments.
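
Weight quantization, one of the compression techniques mentioned, can be sketched as mapping 32-bit weights to 8-bit integers plus a single scale factor (an illustrative sketch, not the SDK's actual scheme):

```python
import numpy as np

# Post-training weight quantization: int8 storage plus one float scale.
def quantize(weights):
    scale = np.max(np.abs(weights)) / 127.0   # map the range onto int8
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).standard_normal(1000).astype(np.float32)
q, scale = quantize(w)
print(q.nbytes < w.nbytes)                                # 4x smaller
print(np.max(np.abs(w - dequantize(q, scale))) <= scale)  # bounded error
```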

All of this is happening upstream. But we also have to educate upstream. We can't just do all this work ourselves. We have to drive the education of what's going on. We have to drive knowledge upstream.

And that's what I'm so proud to announce today: as Brian talked about, the Nervana brand is going to be across a lot of our assets, and this is the Intel Nervana AI Academy. This is going to be a one-stop shop for all artificial intelligence developers and educators to participate in. This is going to be a framework around which we're putting programs to engage and educate and then scale. It consists of 3 main pillars, the first pillar being the Intel Developer Zone for artificial intelligence.

Those of you who know our Developer Zone know we already touch over 20,000,000 developers. We have over 7,000 ISVs. We're rated number 2 in quality as a development environment by Evans Data. So it's a quality platform at scale. We're going to take that scale and utilize it to engage with artificial intelligence developers.

This is where they come to get those optimized frameworks, the tools, the libraries, everything I talked about, this is where they come to get that information and then participate and engage in that community. We're also going to engage with innovators and individual software developers who show a passion for this environment, show a passion to solve problems that advance this capability. We call these software innovators. And we find out who they are, we engage with them, we train them, we give them access to tools, access to training and then allow them to have a platform to communicate what they're doing. We get them into events, we allow them to demonstrate their capabilities.

And this helps us drive knowledge upstream, so they can downstream that to a broader set of people. And if you become very good through our crowdsourcing capabilities, a software innovator can then become a black belt. That's someone the community recognizes not only as an expert, but also as doing their part in growing and strengthening that community. And so they're recognized by their peers as a black belt. And you can find all this on the website below and get engaged there.

Intel.com/ai. That's where all this community is going to be housed. But we want to go further upstream from the current developers and focus not only on the developers of today, but on the developers of tomorrow, because they're the data scientists of the future. So we're going upstream to students as well and focusing on them. And that's why we started a new program called the Intel Student Ambassador Program, where we identify, support and recognize high-achieving students in academia who are focused on this area, who are passionate about it, who are advancing this technology and want to drive it and educate others.

This is a community that's going to be the next generation of developers driving those capabilities, on our Intel platform, our tools and our capabilities, to the market. So with that, I'm very pleased to announce our first Intel Student Ambassador is Dan Iter. Come on up, Dan. So Dan is from Stanford. It wasn't by design that we have another Stanford guy up here. He was selected as our first student. Tell us about the program.

Speaker 19

Yes, definitely. So yes, I'm Dan Iter. I'm a graduate student in computer science at Stanford. I first got introduced to the Student Ambassador program through my research: we were doing a collaboration with Intel on parallel training of deep networks.

And I thought it was a great opportunity to get early access to optimized tools, because for me as a student, it's really important to be able to quickly prototype and run experiments. But actually my favorite part of research is being able to talk to other students, share ideas and find real-world problems that can be solved with new technology. So the Intel Student Ambassador program was a great opportunity for me to get involved with workshops and training sessions on campus and help build the community around AI.

Speaker 16

That's fantastic. So you've been engaging with us. How has it been so far?

Speaker 19

It's been awesome. I love it. The tools are great. I got access to Nervana, to some of the optimized Caffe and things like that, and it's been awesome.

Speaker 16

Fantastic. So in recognition of you being the 1st Student Ambassador, we'd like to present you with this plaque. Congratulations. Thank you.

And since you are still a student, we also have here a backpack with a hoodie and, I think, a lifetime supply of Top Ramen in there for you. So obviously, we're going upstream to the top universities around the world, identifying people like Dan to help be ambassadors, scale the knowledge and really show that passion and put it into action. But we're going a little further upstream. Stanford also has a program called SAILORS, run by Stanford's artificial intelligence laboratory. And the idea behind it is to reach a diverse and inclusive environment.

As you well know, Intel is very passionate about diversity and inclusion. And this is an opportunity for us to reach people underrepresented in technology, in this case technical females. So we're reaching back into the 10th grade and finding technical females who want to take their passion and move into the technology environment in their education but don't have exposure to it. They go through a 2 week camp where we give them mentorship and exposure to these capabilities and train them on what is possible, to really infuse that passion into them so they carry it forward in their studies as they go to university. So we're going as far upstream as we can to get this knowledge built in, grown and propagated to as many people as possible.

But that's not enough. We now have to take all the information that Dan and all these developers are providing and building. We want to have webinars, workshops and then meetups. The workshops are designed to train as many people as we can on our capabilities. The webinars are then designed to propagate that information through coursework. And then there are the meetups, and we're having one November 29th right here in Santa Clara, where all these data scientists and developers can meet up, talk about what they're doing, share information, collaborate and accelerate their efforts by finding people with similar interests in getting their solutions built.

Again, we don't think that's sufficient. That's awesome that we're doing that, but we want to scale even further. We want to reach more and more people and educate more and more people with all the things we're doing and the great innovations we have at Intel. And that's why I'm really pleased to announce our new partnership with Coursera. I don't know if you guys know who Coursera is.

Coursera is the largest online educator in the world. They have over 23,000,000 students, or learners as they refer to them, and thousands of courses. We're partnering with them to take all this knowledge upstream and drive it downstream through this massive opportunity to reach many, many people. So I'd like to invite up the COO of Coursera, whom some of you at Intel will know, Lila Ibrahim, to talk about what we're doing. Good to see you again.

Speaker 1

It's been

Speaker 16

a while. We used to work together a few years ago.

Speaker 20

I still have my Intel badge.

Speaker 16

So welcome back.

Speaker 10

Thank you.

Speaker 16

Let's talk about what we're doing together. Talk about Coursera first and let's talk about what we're doing together. Great.

Speaker 20

Well, let me, if you don't mind, start off with some background. I spent 18 years at Intel. I actually first joined as an intern. So I'm really excited to see everything that Intel is continuing to commit to education. I was a design engineer, an electrical engineer, on the Pentium processor way back when.

And through my time at Intel, I had a chance to continuously reskill and upskill. And I found that my career advanced, not only in the technical field, but then into sales, working with developers, and eventually into management. And there were 2 things that I really learned. One is the opportunity that opens up when you have education and continuous learning about what's happening around you in the world today. And the second, through all my work with developers, was the power of the Intel developer ecosystem.

So I'm especially excited to be able to have my 2 worlds collide today.

Speaker 7

Great.

Speaker 20

As for what Coursera is, Doug gave a great intro for us. We are a tech-based education platform, the world's largest. We were founded by 2 professors from Stanford, to keep that Stanford theme going, 2 Stanford AI professors actually. And our model is basically universal access to the world's best education, to democratize education. And we do that traditionally working with top universities from around the world, over 145.

So think of the top CS types of schools: Georgia Tech, Michigan, U of I, Princeton, etcetera. And we've recently just started working with companies. And we're excited because Intel is one of the first companies we're working with, and the first one we're working with on AI.

Speaker 16

That's fantastic. So our partnership continues.

Speaker 9

We're

Speaker 16

starting to scale. You're going to democratize, which is absolutely what we need to do. So how many people you think you're going to reach?

Speaker 20

Well, we'll see. First, we're starting with a series of AI courses together. And these will be put together in something we call a specialization, which you'll see at the end of Q1 next year, so pretty soon. And we will use that as a foundation moving forward. And if you think about unbundling a degree and then re-bundling the key assets, that's what we'll be working on together with Intel.

And we expect the AI courses to just be the first. What we're super excited about too is that our learners care a lot about career advancement. And so the value of the Intel certificate on the Coursera platform, I think, will really help people. In fact, Intel is going to take the top 10% of the learners from the course, which Intel will be developing with their expertise and our understanding of online learning at scale, and provide resume reviews for internship opportunities at Intel sites around the world. And this is a first.

Speaker 16

Fabulous. Okay. Well, thank you so much for your partnership, Lila.

Speaker 20

Thank you.

Speaker 16

Great to see you again.

Speaker 20

And? Sorry, one other thing. Diane mentioned the Bob Noyce quote.

Speaker 16

Yes.

Speaker 20

And having spent so much time at Intel, I'm reminded of my favorite Bob Noyce quote, which is, "Don't be encumbered by history. Go off and do something wonderful." And sitting in the audience today, I'm really struck by that, because I think together we have a real opportunity to unleash the power and the potential of the developer network around AI. So thank you.

Speaker 16

Very well said. Thank you so much. Not only do we need to educate, which is critically important, and build all these great tools and great platforms and educate everybody on how to use them, but what we also need to do is solve. We need to put this acumen to the test. We really need to solve real-world problems.

And to do that, we need to harness all of those developers out there, all those data scientists out there. And there's no better way of doing that than to have a competition. To harness those minds, you throw a few dollars at it and put a competition out there, and you'd be amazed at what comes out. So we're very pleased to announce that we're going to participate with Kaggle, which is a great program that pulls 600,000 data scientists into one community to help solve real-world problems. They're helping solve business problems.

Examples of the ones they've worked on: Airbnb, trying to figure out how to market to first-time bookers; Walmart, trying to figure out how weather dictates what type of products they put into their stores; as well as Facebook, using it to actually source resumes for jobs. But it also solves more important societal problems, as Diane described, or socioeconomic challenges, whether it's endangered species, whether it's looking at rainfall, or whether it's trying to help people with epilepsy, with data scientists taking uploaded data and being challenged to build models that help solve these real-world problems. So I'm super pleased to talk about our new partnership with MobileODT. The 4th leading cancer in our world today is cervical cancer.

That's the 4th leading cancer. It hits developing countries dramatically; 84% of cases are in developing countries. And it's a very curable cancer. So what we've done is teamed up with them.

They have a mobile device that does soft tissue imaging. We're going to have that data uploaded, provide our Xeon platforms and all the tools I talked about, and then challenge data scientists to put a model together to help detect whether someone has cancer, what stage it's at and what the best treatment is. This is another example of how we're melding technology with health challenges and delivering better value to our world. So I talked about the 3 pillars.

I talked about the Developer Zone, the community where you engage, where you get access to all the tools and capabilities, where you become innovators and black belts. I talked about how we're upstreaming and engaging with students, both in university and in high school, to ensure that the next generation of developers is accelerating its knowledge in this space. And then I talked about how we're scaling that education, how we're reaching the masses, and then how we're challenging the community to solve real-world problems, taking advantage of the technology we're unleashing. And then we're going even further upstream, continuing our research with the top universities in the world around machine learning. We're engaging them on specific areas where we think the advancement needs to continue.

One of those areas you heard about earlier was security. We need to ensure that we look at security across a broad set of things. And so we're working with this set of universities on securing workloads. A lot of the workloads and data sets involved are very, very confidential, and people want to figure out how to ensure that those remain secure. We also want to use it for security analytics.

You heard about it earlier with the USAA gentleman talking about using it to figure out fraud and other things. And then we want to make sure that whatever algorithms we build, they themselves are secure. So we have various work streams with universities to ensure that we actually look at the future challenges so that we can continue to democratize and drive artificial intelligence to the masses. So as I said in the beginning, to unleash the full potential of artificial intelligence, we need to unleash the value of Intel architecture and that's what we're doing. So thank you very much.

And with that, I'd like to bring up Doug Davis, Senior Vice President of Internet of Things, to talk about everything we are doing to solve real world problems. Thanks, Doug.

Speaker 8

That wasn't my original title slide, Doug. Doug obviously took to heart that tagline that said everyone can be an artist. You obviously have a lot of time on your hands, Doug, so see me afterwards, because I have some projects for you. So thanks. I really have the opportunity now to take everything that you've heard this afternoon and hopefully pull it all together.

And I want to continue to work to convince you that the breadth of capabilities and technologies that Intel is developing really gives us the capability to be the leader in artificial intelligence for all the different kinds of environments we've been talking about today. And I think we can all easily agree that artificial intelligence is the next big computing opportunity, right? And it's happening at a scale that will really transform society in much the way that we've seen other transformations happen: the Industrial Revolution, the Information Revolution and all of the changes that came about as we went through those stages. And I really believe artificial intelligence has the capability to do that. And I want to give you some examples over the next few minutes of what those look like.

But at the same time, artificial intelligence has been on the verge of a breakthrough for, what, the last few decades, right? So what's different? What's going to enable it to happen now? Well, I contend that all of these transformations really happen when there's enough economic value across the board for them to become mainstream, right? And there are a number of things that I contend are beginning to come together to make this possible.

The first is sensing, right? The ability to gather data through sensors. Well, the cost of sensors has gone down 2x in the last 10 years. The cost of connectivity has gone down about 10x in the last 10 years. But thanks to Moore's Law, the cost of computing has come down 60x in the last 10 years.

So this has really placed us at that economic threshold we can pass through, which will allow this kind of technology, and the efficient deployment of the things you've been hearing about today, to become widespread and make artificial intelligence a widespread capability. This will enable us to create solutions that really accelerate solving problems that, in the past, would have taken hours, we heard a few examples of that earlier, or days or weeks or months, right? And I want to walk you through a couple of examples just to continue to illustrate this.

One is in the way that fans view sports and the way that athletes train for and participate in sporting events, right? Teams are now collecting data through clothing and equipment, and they're using that data from all different kinds of sources to compete better on the field, to be able to predict injuries and to give us fans a lot more statistics, because we need more statistics when we're watching sports, right? We even see that NFL teams are now using 360-degree high-definition video from a startup company called STRIVR Labs. They capture game scenarios in this high-definition video so players can experience them in a very lifelike manner, and it helps them then to be able to respond to those scenarios when they're in a live game situation. Now this was designed by a former Stanford kicker, Derek Belch.

And I think it's really just a great example of how artificial intelligence is just beginning to make its way into sports. Another one, from a world that I am heavily involved in, is the industrial space, right? We're starting to see applications evolving because, as technology moves along so rapidly, our ability to aggregate, store and utilize vast amounts of data is now giving us the capability to collect almost infinite amounts of data to continually improve the quality or the capabilities of different products. It's also giving us the ability to create efficiencies through self-learning, right, taking actions based on a complex set of interactions within an industrial environment. And also, the ability to bring machine responses to bear in solving problems, very similar to how a human might respond to those issues.

And so you heard earlier that we're moving a step beyond data analytics now, and that provides the capability for factories to begin to make decisions without any human interaction at all. And as a result, it's going to make factories more efficient, pretty obviously, but also much safer as well. And the other area where I think we'll see artificial intelligence really applied is to take kind of tedious, or maybe even dangerous, tasks that we have to do, think about firefighting or agriculture or mining, and enable them to become automated. We're already seeing this today in large farming enterprises, right?

The planting of seeds all the way through the harvesting of crops is done more and more by autonomously driven tractors. And of course, we're starting to see some of these technologies now as we talk about autonomously driven cars, because almost all of us will do one of those tedious and dangerous tasks as we leave here today to go back to work or to go home or to go to the airport. We're going to get in cars. And I contend, based on the statistics, that's a tedious and dangerous task that we're all going to do. But let me go into that in just a minute.

There's a context that was touched on a little bit earlier around this virtuous cycle. And this really began to evolve in early 2015. We started taking the various technologies across Intel and merged those efforts into an end to end architecture. And, marketing genius, we called it the Intel IoT Platform. It gives you that end to end capability, and it describes the cycle of connecting things, gathering data from them, and then using analytics to extract information from all that data and to gain insights that you didn't otherwise have. And the importance of having analytics throughout, from the edge to the network to the data center, is so we can economically optimize how we do that work. And this really put Intel at the forefront of how you create these kinds of end to end IoT solutions. And thus, we call it the virtuous cycle. But if you also think about artificial intelligence and how you create these solutions, it's the same kind of model.

You take a thing, connect it back to the data center or cloud, and create the machine learning and deep learning capabilities to continually make this thing smarter and more autonomous over time. So as a technology leader, and you saw this from Brian earlier, Intel is investing in the growth of AI, and we're committed to continue to drive forward and accelerate this transformation. We're going to build intelligence into every device and give those devices the ability to connect. We can have real-time communications over high-speed 5G networks. We'll have the infrastructure and algorithms to really create those true end to end AI solutions.

And of course, driving some of the partnerships in the ecosystem that you've heard others talk about today and the industry standards that are going to be essential to really make these things mainstream. I mentioned the autonomous car earlier. We think that's a great example of how these artificial intelligence systems will be deployed. And so we put together a little video to kind of illustrate what that might look like.

Speaker 21

For more than a century, the evolution of the automobile has stirred our imagination and ignited our passions. But never in that long history has there been more meaning to the phrase, "I wonder what this bad boy's got under the hood." The car of tomorrow, the fully automated vehicle, won't be confined to just the vehicle itself. It will require a diverse array of flexible, sophisticated and fully integrated solutions, from bumper to bumper inside the car, to end to end of the entire automotive ecosystem outside the car.

The result will be a technological wonder on wheels that leverages the intelligent use of data to enable exciting new innovations, make driving safer and completely redefine the concept of high-performance vehicles.

Speaker 8

So as I said, we think of automated vehicles as a great example of that virtuous cycle, but also a great example of how you create machine learning and deep learning systems. I've got to admit, when I'm talking to friends and I start talking about virtuous cycles and machine learning and deep learning as to how we'll create automated vehicles, they kind of start looking at me funny. And so my favorite comparison is that creating these autonomous vehicles is going to be like teaching a teenager how to drive, right? We load up this young man or lady with all of the learning from their driving instruction school, and we parents teach them everything they need to know. And then we put them behind the wheel, we get in the passenger seat, and they start driving.

And almost immediately, they start experiencing things they hadn't expected, right? New data, new situations. And so we continue to teach them and train them, take that trained model we've built over the years and infuse it into their brain to make them a better driver over time, right? It's creating that kind of closed-loop system. Of course, you realize that the data center moves to the back seat as we get older, right?

But this is also the kind of solution that we'll create with automated vehicles. The car becomes a thing, right? And we're going to do all kinds of real-time monitoring. We're going to have radar and lidar and camera systems that will tell us what's happening around the car at any moment in time. And through high-performance 5G networks, we can overlay high-definition map information so that we know precisely where that vehicle is.

And then we're going to fuse all that data together and provide the computing horsepower needed to apply the trajectory planning to tell the car where it needs to go next, right? And of course, those cars are going to encounter unusual situations. We're going to collect vast amounts of data from millions of cars that are going to pick up anomalies that we can feed back to the data center and continue to train that model and push it out into the inference that will exist within the vehicle. Over time, those vehicles become better and better drivers. They become more capable because of this end to end connected solution.
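The closed-loop fleet pattern described here, deploy a model, log the anomalies vehicles encounter, retrain centrally on the pooled data, and push the update back out, can be sketched in miniature. This is a toy illustration only, not Intel's or BMW's actual pipeline: a one-dimensional "hazard detector" with an invented midpoint-threshold trainer, first trained on data that misses part of the input space, then retrained after fleet feedback.

```python
import numpy as np

TRUE_THRESHOLD = 0.3  # ground truth: a reading x is a hazard iff x > 0.3

def label(x):
    """Ground-truth labels the fleet encounters on the road."""
    return (x > TRUE_THRESHOLD).astype(int)

def train(x, y):
    """Toy trainer: threshold at the midpoint between the boundary samples."""
    return (x[y == 1].min() + x[y == 0].max()) / 2

def predict(thr, x):
    return (x > thr).astype(int)

rng = np.random.default_rng(0)

# 1) Train centrally on initial data that leaves a gap around the true
#    boundary, then "deploy" the model to the fleet.
x_train = np.concatenate([rng.uniform(-1.5, -0.5, 100),
                          rng.uniform(0.8, 1.5, 100)])
thr = train(x_train, label(x_train))

# 2) Vehicles encounter readings the training set never covered.
x_road = rng.uniform(-1.5, 1.5, 1000)
err_before = (predict(thr, x_road) != label(x_road)).mean()

# 3) Upload the logged data, retrain on the pooled set, push the model out.
x_all = np.concatenate([x_train, x_road])
thr = train(x_all, label(x_all))
err_after = (predict(thr, x_road) != label(x_road)).mean()

print(err_before, err_after)  # error drops once fleet data fills the gap
```

The point of the sketch is the loop itself: the deployed model's errors are concentrated exactly where its training data was sparse, and feeding the fleet's logged data back into central training closes that gap before the next model push.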

And so, of course, we're going to continue to invest to make this possible. We're going to nurture these artificial intelligence capabilities for autonomous vehicles. We recently announced a strategic partnership with BMW and Mobileye to bring BMW's autonomous vehicles into series production by the year 2021. And we're bringing together the best capabilities from BMW, Intel and Mobileye to deliver the complete end to end solutions necessary to make these a mainstream technology. Of course, at Intel, we've always believed the best way to learn is by doing.

So we're putting sensors on cars as well, driving them around and gathering vast amounts of data so that we can create those data sets and work together with Doug's team to build the tools, capabilities and optimizations that are going to be necessary for these kinds of solutions to become mainstream. We're also working with Intel Labs to explore what the human-machine interface will be like in these vehicles, right? We'll become passengers in those vehicles, but the way in which we interact with them will change. And we want to build in the capabilities that help us interoperate with things that are becoming more and more autonomous around us. But then you might ask the question: why is Intel in the best position to fuel the AI transformation?

Well, I think you've heard it today. We have a rich history of successfully leading computing transformations. Of course, we know PCs and servers, high performance computing, supercomputers, networking and of course, the rise of the cloud. We've been the leader in developing open, flexible computing platforms over the years. We'll deliver the most complete set of machine learning and deep learning frameworks and tools that Doug talked about, optimized libraries.

And we're working together with developers, data scientists and even students to foster innovation and drive scale to make this wide set of AI applications more and more mainstream. And we're going to engage the ecosystem to upstream all of those technologies and release elements early so that we can start to drive scale out into the industry. And of course, we're going to use our technology leadership to integrate capabilities into our processors. Diane talked about the exciting products we have coming that are delivering phenomenal performance, density and cost advantages in order to support these kinds of workloads and technologies, all made possible, of course, by Moore's Law. And we have an unrivaled breadth of processor capabilities, both in the data center with our Xeon and Xeon Phi products and the kind of acceleration technologies we can deliver there, but also the ability to put Xeon, Core, Atom and Quark processors into all the different things to create these end to end solutions with a common architecture.

And we're also delivering hundreds of millions of devices with Intel architecture technologies to create these end to end types of solutions. And of course, we can layer on vision IP, memory, storage, software, 5G technologies, and the list goes on. And we're very excited to have acquired the best deep learning talent and technology with Nervana Systems, as you've heard today as well. So at Intel, our mission is to invent at the boundaries of technology. Of course, there's always work to be done.

There are big challenges to overcome, and we'll work through those. Those are natural with any new technology transformation. And the same will have to be done to fully unlock the potential of artificial intelligence. But we're poised to take AI out of the labs and out of academia to cross over that economic threshold that will make it mainstream and to enable the amazing experiences that are possible for every person on Earth. So I want to thank you again for joining us today for the Intel AI Day.

And I want to give you a sneak preview of what the rest of the agenda looks like. I'm looking at my watch; it's 20 after. It says we're going to take a 10 minute break and start back up at 2:25. Let's make that 2:30 so that we have a full 10 minute break. And then we're going to start 2 tracks of panel sessions. And so I encourage you to get engaged with those.

I think we have some very exciting topics, and I think you'll find those very enjoyable. So again, thank you.

Speaker 18

We're trying to replicate our thought process by putting in a lot of different rules into the

Speaker 1

Ladies and gentlemen, let's get started. Please get to your breakout session and your panel sessions and we'll get started right away.

Speaker 18

Thank you very much.

Speaker 22

All right. If you all can make your way to find a seat. I'm Kevin Huiskes with the Intel Data Center Group, and I'm excited to be here today to kick off our 3 exciting panels on artificial intelligence. Our first one up today is on precision medicine, and I'm really excited to introduce the moderator of that panel, Bob Rogers. Bob is our Chief Data Scientist in the Data Center Group, and I'm going to turn it over to him to introduce the panelists today.

Speaker 23

Thanks, Kevin. Hi, everyone. It's wonderful to have you here. Would you raise your hand if you thought the first half of the day was really, really interesting?

All right. And then we'll have to ask him why he didn't think it was interesting. So thank you, because I agree. It was fascinating. And I was, like many of you, excited about the announcements that happened earlier.

So I am the Chief Data Scientist for Analytics and Artificial Intelligence Solutions here in the data center group at Intel. I am very pleased to be here to lead this discussion of healthcare luminaries. My background quickly, I've been in analytics forever. The last 10 years, I've spent really working hard to build analytics capabilities and technologies that can bring transformational value to health care. And these four people are doing just that in their respective organizations.

So we have luminaries today from the Mayo Clinic, from Kaiser Permanente, from Cigna, and from Penn Medicine. And so I'd like to invite them to come out onto the stage right now. And if you think about where they're coming from, they've got different perspectives in terms of care delivery, insurance, and, of course, combined delivery and insurance. So we're going to get a very nice mix of perspectives. The other thing to tell you here is that we've all decided to stand for this panel rather than to sit.

And there's a couple of reasons for that. John is leading us here in terms of transformation. He told me this morning that sitting is the new smoking.

Speaker 24

And it's not a bumper sticker. It's reality.

Speaker 23

It's real. So he said, Bob, would you be willing to stand? I said, Sure. And we all stood. So I think that's a great idea.

And that's right. Anyone who wants to stand, you're welcome to stand. Oh, back there. Thank you for supporting us. So and the other thing is that this ensures that you all, all the way to the back, can see us, which I think is a nice benefit.

So I'm going to ask my panelists to introduce themselves and tell us why they're excited about AI and health care. So Mike, go ahead. Thanks for the introduction.

Speaker 25

Really pleased to be here. I spent 16 years in missile defense working for a DoD contractor, and ended up there as a Chief Data Scientist and spent some time thinking about how we could repurpose these technologies in different industries. And when I came across healthcare data, seeing the challenges that clinicians and patients have with converging on diagnoses and then converging on treatment, trying to understand all these complex variables, really understood that AI methods building these data models can give really insightful information to help this convergence on diagnosis and prognosis. So I've been really happy to be at Penn for the past few years and building these kinds of solutions.

Speaker 6

So my name is David Holmes. I'm at the Mayo Clinic, and I've been at Mayo for just over 20 years now. And there's been quite a transformational change in the big data component out there, where imaging used to be the big data. I did a lot of imaging, and we have developed a lot of techniques to deal with imaging and to visualize imaging data.

And then this change happened where we started to instrument everything well beyond imaging. We instrumented the people, we instrumented the equipment that we used that was itself collecting data, and we started to instrument everything. And so data changed dramatically about a decade ago, and that's really the path that we're on with the big data. And the real value I think that AI is going to bring to precision medicine is how do we sift through this really massive amount of very heterogeneous data down to information that's actionable? How can we really turn that into decisions and how we can do it in real time?

Because more and more, we need to make real-time decisions in health care, to force change as quickly as possible. So that's why I'm excited about AI and precision medicine. Great. Thanks.

Speaker 24

John? So I've had the privilege of working at Kaiser Permanente and building out a natural language processing team. And we've had countless successes in mining the data and coming up with new insights that have been actionable. So I won't even list those, but I will talk a little bit about what really excites me about the future. So something I've been talking about for about 7 or 8 years is NLP for NLP and CBT, right, Avia?

No. Natural language processing for neurolinguistic programming and cognitive behavioral therapy. And the simple way of representing that is a sage on the shoulder. So there's only so much influence we can have in our children. We know that there's a whole lot of mindfulness and resilience in terms of habits that are developed early in life and that those are critical to both lifespan and health span.

It's not what life delivers us; it's how we react to it. So what if we had continuous monitoring of all the conversations our children were having, wherever they were, and a local analytics and feedback loop that helps them deescalate a conversation with a bully, or helps them invite people into a collaboration at an early age, and teaches them how to be mindful of the micro decisions we all make every day to be healthy. One other thing I'll mention is, and I spend a lot of my focus thinking about how we bring better health to children, there's pretty good evidence that children before the age of 5 who get too many antibiotics end up at the age of 15 with an MRI scan showing that their gray matter is maldistributed. And there's a lot of circumstantial evidence that this has a very direct impact, causing a dysbiosis that changes the bacteria in our gut, which have 100 times as many genes as the human genome, which then interacts with our immune system, which then interacts with our genome and how our brain is wired. And so that pathway is so convoluted, with so much data between the 100 times the genes in the bacteria that are in our gut and those in our own genome, interacting with the immune system and our genome and our brain and how it's wired, that there's no way the human brain can possibly wrap itself around that complexity and that large a data set. And so machine learning is going to play an absolutely critical role in helping us understand those kinds of connections. So there's a couple of things that I'm really fascinated by and that I believe are completely tractable with some of the data sets and some of the machine learning tools we have today, and even more so every day with all the great work that's going on here.

Speaker 4

Of course.

Speaker 26

Hi, I'm Cameron, a long-time technologist. I went into the health insurance field back in about 2008. Since then, I've been labeled as having a bit more of an innovation kind of focus. But we do a lot of interesting stuff, really, to help enable what we call our consumer health strategy: the emergence of all these ancillary tools and devices, bringing those all to bear so that we can actually help support personalized experiences for customers.

And what's really interesting is we also own a lot of how that data is now going to be carried across into their experience with the whole care provider ecosystem. So how do we actually bring their PCP and their specialists all that information they've been using in their personal time, the tools they use to manage their condition, or where they get their information? How does that flow across, and how do you actually help affect the right decisions, the right steerage, at the point of care to get the best outcome? So the AI stuff is interesting for me for a lot of pretty pragmatic reasons. With this massive increase in data, and I heard it in all of the conversations up till now, I can't hire enough people to actually distill it down into credible points that I can use to influence the critical moment with a customer, to make sure I've got the best information available, and to identify where I have to route, make decisions, and support people more effectively.

And it's really that AI helps us get that signal-to-noise ratio into something manageable, right? I'd also say that we have a lot of human processes. I mean, there's a lot of predictability that we look for in the healthcare space. We've got a lot of intelligence, right? There's no lack of smart people in the space.

But there's this concept of allowing them to work up to their license, to really do the things that matter, to use the best of their experience to bring the best outcome, and maybe offload some of the less demanding things: some of the more trivial things that could be just a matter of education, right, through interactive self-support means. Helping people get access to care when your doctor went home for the night but you need some information to make a better decision. It's 2 in the morning, my child has the sniffles. Getting people directed to the right type of care, care they can actually get better benefit from and that's more convenient for them.

Speaker 23

Can I just build on that a little bit? Of course.

Speaker 24

I agree 100%, and I think that was a brilliant description of one of the biggest opportunities. The thing I'd like to add to that is that as we bring all of these data sets to bear to convert data into knowledge and actionable information that has been steered, one of the missing pieces that we have in personalized medicine is knowing how every individual is motivated differently than every other individual. Exactly. So being able to do machine learning either real time or offline or both to understand when we deliver the right message at the right time to the right person, it's how do we deliver it? Do we inject humor?

Do we use an in-app avatar? Do we use some sort of in-app rewards system? Do we use a text or an email for a different situation? And so I like to think that personalized medicine is going to be most effective when we have the kind of AI and machine learning that really looks at the motivational complexion of each individual and personalizes the motivational approach. To the extent, and I'm working with a vendor who's doing some of this, that we can profile people very quickly, we can then determine whether they respond better to humor, or to a very directive kind of "you should do this," or to a set of 3 options and "talk to your family" or "talk to your doctor." So once we begin to know this, we can create these what I call motivicons, which are not like emojis; they're actual pieces of video or avatar-delivered messages within a large spectrum, a motivational formulary. Just like we have a drug formulary today, have a motivational formulary that's personalized to the task and to the individual and their motivational complexion.

Speaker 26

Can I take that one step further? Please. And then Mike will jump right in

Speaker 25

and pile on this one. You bet. We're just free flowing

Speaker 26

up here. Yes. You're the catalyst. Yes. So traditionally, the way we actually look at personalization around a lot of our customers has been the typical kind of market-segmentation approach, right?

Taking a lot of our population and building the cohorts that have some shared representative characteristics. To really get here, and I think this is where AI has a tremendous, tremendous opportunity, is the concept of a segment of 1, right? Truly as an individual, you can bring all that information to bear and, again, drive down that signal-to-noise ratio into something where you make sure the right information is really, truly present. You're building more of a rapport with that individual, because that's the key to actually driving people to make better decisions and to build the patterns that lead to better health outcomes. All that stuff is about building rapport and building confidence and trust, based on show me that you know me.

Speaker 5

Right.

Speaker 24

It's so ironic that population care is still becoming avant-garde in the era of personalized medicine. I mean, really, we'd like to bury it. Yes.

Speaker 25

I just wanted to add that, on accessing the data, we have great collaboration with Intel in getting us there. In terms of seeing outcomes for the patients, though, this is grinding work where we have to get the right human factors in our face. So we have this solution that's sending out a detection when someone is at risk of severe sepsis or septic shock, basically when the body is about to collapse. The positive predictive value is between 40% and 50%, which is tremendous, and it comes out 30 hours early. We've been running for 5 months, and we see no change in the outcome for our patients, which is really interesting, right? And then you take something else that's also used in combating this, which is getting a blood culture and understanding what they're battling. The blood culture is called, and the clinician, before they get the result a day later, they've already decided.

They've already decided on full-spectrum antibiotics or some other set of care. And so you have these two things that are kind of crossing: the prediction, which comes out 30 hours early, too early, so the clinical team goes to the patient and they don't look sick; and the blood culture, which comes back too late. And so we're thinking about combining the alert with the order for the blood culture, to say, look, you're not going to get the results in the time that you want, so how about we give you this risk value when you order it? And this is the really important work in making sure these things are effective and making sure they scale at the enterprise level rather than becoming these bolt-on things: how do you get that human-factors interface right, whether it's super personalized or targeted to a task? And that's really the challenge that we have in operationalizing all these solutions.
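The positive predictive value cited above is just true alerts divided by all alerts fired. A minimal sketch with hypothetical counts (the 45/55 split is invented for illustration; only the 40% to 50% range comes from the discussion):

```python
def positive_predictive_value(true_positives: int, false_positives: int) -> float:
    """PPV (precision): of all alerts fired, the fraction that flagged a real event."""
    return true_positives / (true_positives + false_positives)

# Hypothetical tally for an early-warning sepsis alert: 45 alerts preceded a
# confirmed severe-sepsis event, 55 did not.
ppv = positive_predictive_value(45, 55)
print(f"PPV: {ppv:.0%}")  # → PPV: 45%
```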

Speaker 6

So if I may, I think an interesting aspect of what you're trying to do, which is change the system a little bit, and of what you've described in terms of trying to personalize to the patient, is that one perspective is sometimes not at the table, and I think you maybe can speak to this more than the rest of us: we also want to be able to model a provider and their mentality of how they're going to interact with a patient. So we've got some providers who are very high risk, high return, and some patients respond well to that. We have other providers that take a very conservative approach, and that works out really well. And only when you get that interaction as part of the AI model, where you're modeling the provider, minding the person who's paying the bills, and minding the patient, can you really get a holistic view. And that's really hard for us to do individually.

We can do a little bit there, but we have to bring in data from all these sources. And that's where AI really provides a holistic model, including the provider view and the patient view, and then of course the payer view. Yes.

Speaker 24

I think one of the greatest examples of that is, 1,000 times a day, a 45-year-old woman walks into the office and has newly diagnosed diabetes and hypertension. And you can take in all the data from everything you know about her, and you can do the mash-up between the universe of knowledge in big data and the universe of the N of 1, and say, well, here are the 3 options. Option number 1: you're going to radically change your lifestyle, you won't need any medication, and your diabetes and hypertension are going to go away. Option number 2: take these 2 drugs and come back in 2 months. Option number 3 is somewhere in between.

And the woman says, well, the reason I have hypertension and I'm eating too much and I've got diabetes now is because my daughter is getting married in 3 months and I'm going through all the wedding hassles. And what I want is option number 2: give me the 2 drugs, and I'll come back after my daughter is married and the in-laws are all out of town. But in order to support her decision in what she wants to do, you do exactly what she says: lay out what those three options are and allow her to drill down, to the extent that she is capable with her health literacy, to begin to understand what those options represent across the waterfront. And machine learning is going to play a big role both in the construction of those options and in the ability to drill down into the reference information in the N of 1.

Speaker 26

Right.

Speaker 25

That's really what we're focusing on a lot, this kind of dialogue of cost benefit, right? The AI can help with that, but it's a dialogue waiting back on the clinician or the patient. And that's a really important piece that we've identified. Initially, we just came out with, like, the blinky light: we're going to send out these alerts and insight, and wonderful things will happen. And we're just not seeing it to the level that we want, not a big surprise. And we're really working to that next level to provide those decisions we've made.

Speaker 26

Yes. So, interesting. My perspective has been that traditionally a lot of the health care delivery stuff has been very quantitative. I mean, clear, measured data: biometrics, labs, things that you can actually measure. Where I'm interested specifically, and where I really think AI has a tremendous opportunity, is more of this qualitative stuff.

The stuff that's kind of more implicit, or derived intelligence about how people are interacting with systems, that falls outside of the normal health care delivery place. So people are going to Dr. Google to get a lot of their information. There's a lot of other financial information and personal details that we haven't captured or collected before. A lot of that information, and identifying positive patterns in it using things like Saffron, are things we hadn't even considered looking at, because most of our traditional data modeling or data science approaches have been the kind of rigorous scientific approach of saying, here's an expected outcome.

I'm going to go after that and build models that prove or disprove it. Where AI can help us, I think, is in finding some of those new nuances in all this data, to figure out, when we make a decision around a treatment of care or when we have an opportunity to intervene, how that can help us shape it in a way that makes it potentially more effective, or maybe gets us to a better outcome by leading the patient.

Speaker 23

And there was a... Go ahead.

Speaker 6

I was just going to say, I want to throw a question out to the panel, because I think we're converging on some ideas here: the N of 1, the qualitative, where AI can impact. But there's a looming question that we're all going to have to face as data scientists. What works about the traditional model of study is that you've got a rigorous scientific experiment that you can validate and get to an outcome and say, I know this is true because I've proven it. Now we're talking about the N of 1. We're talking about doing qualitative things.

How are we as scientists going to validate that in real time, so that when we give some information to a clinician or a patient, we have some confidence around it?

We said it might be an N of 1, but we still think we're pretty good. And how do we control that kind of risk of going off in the wrong direction?

Speaker 26

And that's why, like, I know that our approach specifically is, I mean, so we can talk about all the wonderful things that AI can do someday. But because we're in a very highly regulated environment, we have to get it right. You can't get it wrong. You can't use predictive modeling to actually say, well, I've got about an 85% certainty. I'm going to make a decision on a care path for that.

So I mean, our pragmatic approach is really, like I said, how do we actually take some of those more pedestrian, more prosaic things out of the system. So the people that are there, who are licensed to do this appropriately, get less noise. They can actually focus on making sure that they're delivering the best care with the best information. And as these things build up, we start to build some confidence, for example by having virtual assistants shadow along with some of our people as they get on the phone and support somebody in a disease management or case management situation, building up an understanding of some of those patterns almost by shadowing along.

Then we can get to a point, potentially, where we can really start to do dress rehearsals on this stuff and figure out when it is the right point to allow these big critical decisions to leap over to a technology.

Speaker 25

Yes. I think it's a really important place to have publications, to have shared experience. We haven't worked that out yet. And there's an area of just causal inference, like understanding: all right, if we're detecting that someone is likely to have severe sepsis, and there's some care intervention, and then they didn't develop it, what is that? We want these false positives to increase, but in a way that's a good thing, right, understandable and expected.

And it's an important area of study that we need to contribute to, to make these things more scalable, more plug and play, I think is important.

Speaker 24

And I think what's fascinating is that in the field of statistics, which is one of the most challenging disciplines in math, there are 3 new ways of looking at data that are being statistically advanced in this new context. So in mobile healthcare, stepped-wedge statistics, to be able to add people to a cohort over time and still be able to do valid analytics, is one. The other is, to your point, single-subject studies. So you see what happens along the course of this N of 1 and start drawing conclusions about that one individual, for whom there's no one else just like them unless they have an identical twin. And even then they're not the same, because of the transcriptomic interaction with the environment.

And then the third one is the inference of causation. And there's a whole new statistical field called data interrogation, where you look at the big data analytics, you find something interesting, and you can begin to query the data in different ways and in different data sets, without having to actually do prospective randomized controlled trials, to cross-validate: is this spurious, is this real, is it causal? So those three areas of statistics address each of those three different questions. And they're being invented because of the new access to data that we have today and our ability to become much more personalized. We need to work together.
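The stepped-wedge design mentioned above staggers when each cohort crosses over from control to intervention, so participants can be added over time while every cluster still serves as its own control. A toy sketch of such a schedule (the cluster count, step count, and crossover order are all hypothetical):

```python
def stepped_wedge(n_clusters: int, n_steps: int) -> list:
    """Return a schedule where schedule[c][t] is 1 once cluster c has crossed over.

    Every cluster starts in the control condition (0) and switches to the
    intervention (1) at its own staggered step, one cluster per step.
    """
    schedule = []
    for c in range(n_clusters):
        crossover = c + 1  # hypothetical order: cluster c switches at step c + 1
        schedule.append([1 if t >= crossover else 0 for t in range(n_steps)])
    return schedule

for row in stepped_wedge(3, 4):
    print(row)
# → [0, 1, 1, 1]
#   [0, 0, 1, 1]
#   [0, 0, 0, 1]
```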

Speaker 2

Yes. Here we are.

Speaker 6

And Intel needs to keep us in the room.

Speaker 26

I think we're just having a private conversation.

Speaker 23

So we've obviously got an opportunity to create insights from data to understand context, and I think we can probably all agree that the concept of AI really is about understanding context: what's important to me right now, given what's happened recently, what's happened in the past, who you are, where we are right now. So how have you instrumented these insights into your organizations? I'm curious about what technologies or approaches you have used to get to that point of having an insight from data. And then how are you instrumenting that?

And particularly, it's a sort of 3-part question: if you think about AI as something that augments human capability, which humans are we augmenting in each of your experiences? Where is the bang for the buck? Because it could be talking to providers or administrators, case managers, patients. There's a lot of opportunities.

So what's your experience instrumenting it, and whom are you influencing?

Speaker 19

Right, right.

Speaker 25

So we've taken a couple of approaches. A way to categorize it is: we're looking at cases that are very rare and very impactful. It's a burden on the clinicians to monitor these very rare events, but when they occur, they're big deals for the patients. And so severe sepsis, maternal morbidity, stroke: these are things that we're building models for and sending out alerts on. And we're starting very simply, where we're just notifying the clinicians. In some cases, there's just a be-vigilant kind of response, and that's the severe sepsis case, where we're very underwhelmed by the response.

In others, we have PREVENT, where there's a spontaneous breathing trial that the respiratory therapist will go and conduct, and we've seen better results there. So things that are very actionable, in line with the care that is already being provided or intended to be provided, are things we find better results in. Those are interesting, but really hard to manage, because these things that have very low prevalence are going to take a long time. I mean, for one case, it took 5 months just to say, oh, we missed the mark, right? So now we're going to dial the knob a little bit, and we're going to wait another 5 months. And we're going to do that because it's worth it, but it takes a long time to converge. Then there's another set of problems where there's little decisions made all the time, and we can just optimize them a little bit, whether it's levels-of-care decisions, or the people that go around our hospital and just make sure things are documented correctly.

Speaker 23

At an administrative level.

Speaker 25

At an administrative level, lots of little decisions. That's right. And so by looking at these places where there's many decisions made frequently, and we can just optimize a little bit how the humans are going through and creating these lists, that's going to be of value to them. It's already in line with the work they're doing, and we can justify the value to the health system and the patient by the outcomes.

Speaker 23

Right, right. And we see that a lot, the opportunity to prioritize based on the very specific circumstances. So, David?

Speaker 6

Yes. So in our case, while we're involved with things at the individual patient level and the administrative level, Mayo kind of has a bit broader a strategy. We're really working towards what Mayo has always called the value equation, which is quality over cost. At one point in time, a decade or so ago, people thought the value equation was a single equation that worked for large groups of people. And now with individualized medicine, precision medicine, we want to make that value equation work at the individual level. In order to do that, you need that individual-level context that you're talking about.

So for us, the context is really: what is the personalized cohort that mimics you the best? There are traditional ways to do that. We happen to use a graph computing sort of model to build context. Graphs are very good because they deal with heterogeneous data very well, and you can build patterns pretty straightforwardly. And so we get our context from that, and then we use that context to build out a model of the value equation, quality over cost, for that person.

Because even if you look like your twin sibling, it's really not the case that you need the same care, whether that's a human-factors component, a genetic component, or a where-you-are-in-your-life component. All those things are very different, including the environment you're in. So it's building those models in real time that we need the AI for, because we can't do that value equation very quickly with pencil and paper.
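One way to read the graph-based cohort idea above is as a nearest-neighbor search over heterogeneous patient attributes. A toy sketch using Jaccard overlap on attribute sets (all patient IDs and attributes here are invented for illustration; the actual graph model discussed is certainly richer):

```python
# Each patient is a set of heterogeneous attributes (diagnoses, markers,
# lifestyle factors). The "personalized cohort" is the patients who overlap most.
patients = {
    "p1": {"diabetes", "hypertension", "age_40s", "marker_A"},
    "p2": {"diabetes", "age_40s"},
    "p3": {"asthma", "age_20s"},
    "p4": {"diabetes", "hypertension", "age_40s"},
}

def jaccard(a: set, b: set) -> float:
    """Shared attributes over total distinct attributes."""
    return len(a & b) / len(a | b)

def personalized_cohort(target: str, k: int = 2) -> list:
    """Return the k patients most similar to the target patient."""
    scores = {p: jaccard(patients[target], attrs)
              for p, attrs in patients.items() if p != target}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(personalized_cohort("p1"))  # → ['p4', 'p2']
```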

Speaker 24

Right. So 2 days ago, I had the privilege of spending the day with Vint Cerf, who's doing a number of global initiatives: one is innovation for jobs, and the other is the People-Centered Internet, to bring the Internet to the darkest corners of the world, to help bring people into the digital economy, and to address the issues of loss of dignity with job loss. And his objective in starting that initiative 7 years ago was knowing that AI and robotics were going to displace so many jobs so quickly that we weren't going to be able to keep up. So last week at the Techonomy conference, run brilliantly by David Kirkpatrick, there were a number of open discussions like this about how that's going to play out, and sort of a consensus among people who have thought about this a lot that we have a problem, maybe for the next 10 years, of disintermediation and loss of jobs. But in 10 years, to your question, who's going to benefit?

Everybody. The number of jobs that are not being done in healthcare alone today is huge. And so the opportunities, whether it's for the administrators or the analytics groups themselves or the physicians, the nurses, the pharma, the patients: they all are in a position to have collaborative relationships with AI and robotics that will help them. What I see as our big opportunity is to use augmented reality and virtual reality, which have been shown to accelerate learning dramatically. And this field is just exploding, as everyone here knows.

And to be able to try to apply AR and VR to how we integrate the AI, the robotics, and the human. The last thing I'll mention with respect to that is, I had the good fortune of getting to know Andy Grove before he died. And I think most people know he's from Budapest. And one of the biggest problems in this country right now is, if you go to a prison and you look at who's in prison, almost everybody there has a learning disability or a language disability. They get bullied in school, and they have problems learning and integrating into society. So I think one of the coolest things about the collaboration of AI and machine learning in the space of natural language processing, but not exclusively, is being able to do early identification of where children with English as a second language have language issues, for which today there are no adequate tools for early diagnosis and intervention.

So by the time you find out that an immigrant child has a learning or a language problem, they've already been subjected to a lot of discrimination, above and beyond their being children of immigrants. And so to the extent that we can apply this to bring out the best in our immigrant population, like Andy Grove from Budapest, I think we have a huge opportunity to expand the job market, and, through early detection of these problems, to keep people out of prison and living a life full of dignity. And machine learning, I think, is going to play a huge role in that.

Speaker 17

It's great.

Speaker 26

You had a big question. There were 3 parts to it, right?

Speaker 23

You can answer any of the above questions. But I guess the question was, how have you instrumented AI and who's being augmented in a meaningful way?

Speaker 26

Well, so I guess the answer is in 2 parts. It depends on everybody, right? So traditionally, I mean, health care has not moved very quickly in the way the information moves around. From my lens, working for a health insurance company, it comes in through claims. Claims may indicate something that happened a month ago.

And I'm using that to draw inferences about what's going to happen potentially in the future. And so as we move forward, we're starting to get into this business now where there's more focus on risk with the providers: how we're actually managing that risk, predicting behaviors, managing population health more effectively, using a lot of these latent data sources. I mean, we need to speed that up to make it relevant, especially as we're creating these alliances with providers, because we do manage risk. That's kind of our bread and butter. Helping bring that kind of competence out front at the point of care is very helpful.

But it's not going to serve us very well if we're still using all these latent sources, where the information that's in the EMR is minutes old and it contradicts what I'm sending, right? And when we talk about who we're going to augment, I mean, I don't know that it's necessarily a person in all cases. We talk about our end goal being making sure the customer experience is personalized. We'd also like to make sure that the workflow is king, in terms of how you actually get people in, deal with them effectively, and get them to a better outcome. So we use that same level of inferencing and intelligence to personalize how the information is delivered to a care coordinator that's sitting out at an accountable care organization. And they're making sure that they call out and get Linda in for her appointment, because we just found out she's got a potentially undiagnosed condition. You have to make the information as effective for them as possible, so they can go about their day and still meet their value objectives.

And so everybody in that whole value stream is pretty important. Traditionally, we have a lot of programs we do around clinical management, condition management, behavioral, case management, helping people get a referral for services so they can get in. For all the people associated with that, there's an opportunity for AI to kind of automate, to make sure where possible that we're actually staying within compliance, or to make sure that we understand where there's variance and how to align to any standard operating procedures and practices, etcetera. So I guess that's why I said it depends on everybody.

Speaker 25

Right. Can I come back on that just real quick? I mean, I think the consensus here is around optimization, around personalization, around insights. It's not around a driverless health system, right?

Speaker 22

Exactly.

Speaker 25

You know, you hear things like we're going to get rid of doctors, we're going to get rid of nurses because this is all optimized. And it's a little bit like what Andrew Ng, one of the founders of Coursera, said: it's like worrying about overpopulation on Mars. We're a long, long, long way from even having to address that. And it's really about getting better outcomes and focused outcomes, helping people that really otherwise wouldn't have been helped, and unburdening our care teams to do exactly what they want to do. And for all our solutions, we are pulled through by the clinicians. They're banging on our door to help them deliver better care.

Speaker 24

And one of the things that's come up a couple of times is the human factors and the soft side and the social determinants of health, and we haven't really dived into that much yet. But I think it's important to recognize that on the human factor side, the household we live in, the community we live in, all of the social factors that influence our health are profound. There's a ton of literature validating that. And yet if you look at the complexity of what drives us from a social perspective, it's vast. We now have tons of data accessible to help understand this.

So to the extent that we can link community health and the social determinants of health with personalized healthcare, I think we can really abandon this notion of population healthcare, where we just classify people according to 1 or 2 diseases or 1 or 2 lab values, and begin to really look at this broad complexion of the human experience. And to your point, we'll never have a driverless healthcare system as long as people live in communities that provide for very rich and profoundly deep experiences.

Speaker 23

Totally agree. So we're getting close to the end here. We've talked a lot about different ways that data can drive understanding of patients, of disease, of different workflows for how to deliver care, even coaching. So there's a lot of different things that we're doing and different things we can do. In your vision, if we're really successful with AI in the next, let's say, couple of years, so not 10 years out, but in the next couple of years, what's going to be better about our healthcare experience as people trying to stay well?

Speaker 25

Yes. Just real quickly, we're very much in the beginning. I think we're helping to converge right now, focusing a lot on diagnoses. I think in the next 2 or 3 years, we're really going to help with those cost benefit decisions and personalizing those things. We're not there yet. There are a lot of great challenges ahead of us. But I think it is on the horizon of 2 to 3, maybe 5 years.

We're going to start seeing that and seeing a lot better outcomes for patients.

Speaker 12

Great. So better outcomes. David?

Speaker 6

So my view on it is, and it is just my view, that to the individual patient who walks into the hospital to talk to a doctor, it's not going to look that different. It's just that when they leave, the outcomes will be better. And it will be better because so many things are happening behind the scenes that you don't know are going on. But in the end, you still ask your doctor and say, doc, I don't feel well, help me. And the doc will do that.

And then you just know that when you leave, you're better off than when you started. And we have that now, we really do. But it's going to be more precise, it's going to be more effective. And at the end of the day, it's also going to cost less. Thanks.

Speaker 24

John? Today, 50% of the primary care docs in this country and across the world are experiencing burnout. And the reason they're experiencing burnout is because in the digital age, you're confronted with every potential care gap that might have possibly existed and gone unaddressed for the past 20 years, plus those right in front of you today, and it's overwhelming, and people are overwhelmed. And so my view of how things are going to change is that we're going to use modern technology to restore ancient wisdom and allow doctors and nurses and pharmacists and healers of all sorts to do what they wanted to do when they went through their training to go into the healing professions. And that is to be compassionate and caring, to listen to the patient, to be empathic and actually understand how this person is different, to put yourself in their shoes and be able to understand it.

Today, our physicians, nurses, and pharmacists are so overwhelmed with disintegrated data and disintegrated advice and incomplete synthesis that they're burned out, they're really burned out. And so I see the difference in the experience of the person passing through the healthcare system as twofold. 1, we'll try and keep them healthier in the 1st place by providing more mindfulness and resilience, so they don't get sick in the 1st place. And secondly, if they do get sick, we'll have people on the healing professional team as well as the personal care team who have an opportunity to invest in understanding, being empathic and being compassionate in care. And I think the person's experience is going to be vastly different.

Speaker 26

I mean, if I take a look at where I think it's going to be in a few years, I would say, and I mentioned this a lot during the talk, it really has to be personalized, right, to drive better outcomes and drive value to people. And I think it's going to become much more convenient.

So when you come in, people get the right level of care that's really needed for where they are, right? I mean, helping our doctors or care providers and care practitioners work at the top of their license and do the things that really merit their time, and getting the other stuff out of the system so that we can actually make the right decision, provide the right service. And you're going to see a big cost benefit to that too, across levels. People are getting more information earlier. They're not burdening our health care system.

They feel better. The information that really kind of informs who they are as an individual is being brought to bear at that critical point of care. So there's actually more confidence that what they're getting is appropriate. They're not doctor shopping and some of the behaviors that we see today. So I mean, that's what I would say.

It's going to be cheaper. It's going to be personalized. It's going to be more convenient.

Speaker 23

More convenient. All right. So I think this was really interesting. The way I would summarize it: I think I'm going to steal from John and say that the opportunity for artificial intelligence in healthcare and precision medicine is not to make healthcare less human, but actually to augment it and make it easier for us to make healthcare more human. So I want to thank the panelists; I think it was a really interesting conversation.

Speaker 26

Thank you. Thanks.

Speaker 22

I don't want you to go far because we've got 2 more really outstanding panels on AI to come. We've got the Autonomous World panel, and the group has told me to just take a 5 minute break. So if sitting is the new smoking, you know what I'm talking about. If that's the case, 5 minute break, back here for Autonomous World in 5 minutes. Thanks.

Speaker 13

Hello, and welcome. I am absolutely thrilled to be standing in front of all of you for our 2nd panel discussion today. As we see more and more industries and market segments advancing towards autonomy, it's clear that AI is becoming the fuel for that innovation in autonomous things and autonomous systems. And so for this panel, we've assembled an extremely distinguished cast of characters who bring a diverse set of experiences ranging from the automotive industry to the manufacturing sector and the energy sector as well.

We also have a guest from the academic arena. So I'm Brian McCarson. I run IoT Strategy in the Internet of Things Group at Intel. And let me now introduce our 4 distinguished guests. Now, we're not going to stand, by the way, just so you know.

We contemplated standing behind the chairs awkwardly and just holding them, but figured that wouldn't work. So let me first introduce Kathy. Kathy is a Manager for Advanced Analytics at Devon Energy. We're also lucky to have Jaganath here as well, who is the Head of Data Services in the Digital Factory Division at Siemens. And we have Reinhard Stohl, who is here from BMW and is a Vice President of Artificial Intelligence and Machine Learning.

And our other honored guest is Professor David Yoffe, who is here from Harvard Business School. So let's go ahead and have a seat. So thank you to everyone for coming. What we'll do to get things started is go through and have each of you speak a little bit about how artificial intelligence is playing a critical role in the evolution and advancements of autonomous things and systems in your respective fields. So Kathy, we'll start with you.

Speaker 11

My name is Kathy Ball. And I was going to tell you a little bit about how artificial intelligence is changing the landscape of oil and gas, particularly as commodity prices really start to bottom out. You have to be cost effective. You have to do things in new ways. So the industry is actually starting to really adopt some of the AI.

And what we focus on is autonomous drill bits, even down to autonomy in the digital oilfield, getting the value out of that and getting past the hype.

Speaker 5

So I think I'll start off with trying to define what is autonomous manufacturing. I think at the very extreme end of it, I can imagine a scenario where you get a digital model that comes into a factory as an order. That digital model is then put into production by completely autonomous machines. You get the product at the other end to the right quality and specifications as the model defined it. And then you have logistics, autonomous logistics, shipping that and delivering it to the person who ordered that.

Now in its extreme, this would be autonomous manufacturing. But I think there are a lot of foundational pieces that need to be put into place for this to happen. And that's something I hope we'll discuss during this session here.

Speaker 13

Fantastic. Reinhard?

Speaker 27

So artificial intelligence is transforming our product in at least 3 main ways, as I imagine it. One is it makes autonomous driving possible, right? So we have new ways of perceiving the environment of the car. The car has a way of interpreting the situation around it and then finding its way through the traffic, negotiating with all the other traffic participants. That's, of course, the major impact of AI.

Then the second impact is personalization and adaptation. So providing context to any situation that the driver or the car may be in, and also the user interface becoming more natural: gesture, natural language, everything just being less brittle and more robust in a naturally intelligent way of interacting. And then the third way is that the whole notion of mobility and the ecosystem of mobility will just change under the influence of AI. So rather than cars for individuals, driven by individual minds trying to get from A to B, the whole ecosystem will be connected and use that connectivity to have an overall intelligent solution. You may think of intelligent car sharing and so on.

Speaker 17

So I don't have a similar story to tell. Obviously, as a professor, I'd look at this a little bit differently. So every year, I give a lecture on what are the new technologies I think executives around the world need to be thinking about. And I try to look at things that I think have impact within the next 18 to 36 months. So this year, AI machine learning is the topic that we'll be talking about precisely for all the reasons that the panel has described.

But the business problems associated with actually making money out of that technology are much more difficult than just saying it's a technology that's going to be important and relevant across the board. So let me just raise 2 or 3 questions, which again, I'm sure we'll come back to, for the context of the panel. One is the problem of what I like to call looking forward and then reasoning back, which is: we can look forward and sort of understand what this is ultimately going to do. But then we have to think about what the technology delivers 2, 3, 4 years from now and try to figure out, okay, what does that mean I have to do today, because we actually aren't there yet in many of the technologies that we've been discussing in today's conference. So problem number 1 is how do we reason back to the actual strategic implications for today?

The second question that I think we have to think through and understand is ultimately for this to be successful, it has to be a true platform and not a product. And a platform means it has to have a degree of openness, it has to have a degree of sharing of data across firms, which is challenging again. So BMW wants to do something distinctive from Audi and wants to do something distinctive from Mercedes. Yet ultimately, for us to be able to deliver on many of these AI solutions, we have to figure out how do we make these not just products, but platforms because their success depends on that platform structure. And then thirdly, related to that, I think we have to think through the potential for network effects.

Because in the end, AI is going to be successful and have the ability to drive the value we all hope for if we can find ways that everybody benefits by virtue of everybody else doing the same thing. And again, in a world where we try to do things in a proprietary way, driving those network effects becomes problematic, and that actually slows down the ultimate adoption and success of a technology such as AI.

Speaker 5

Can I piggyback on that a little bit? Yes. I think what you just mentioned, about us not being there yet, is absolutely true. When I go into the manufacturing space, and those of you who've been in the manufacturing industry know this, control systems have been in manufacturing for the last 30, 40, 50 years.

So manufacturing plants have always been generating data, huge amounts of data. The problem is we never exploited the richness of that data; either we didn't have the technology or we didn't have the compute power. But if we did that, which is possible today, and I think that is one of the foundational pieces that we need to add, IoT brings in that foundational piece: connecting all those data sources, connecting those machines, gathering those large big data sets. Because when you do that and apply machine learning and deep learning, you start getting production models, production models for different operating environments. And production models are a very important element for autonomous manufacturing. And I think these are the foundational pieces that we need to put in place.

Speaker 13

Excellent. On one of those principles that we just described, and you highlighted the 3 different categories of consideration, autonomous driving keeps showing up as a breakthrough technology that's bringing autonomy and artificial intelligence closer to the consumer than ever before. Those of us in the U.S. who flew here probably flew in a largely autonomous airplane, where the pilots are just there to observe and be around in case something goes wrong.

But the actual idea of ownership of an autonomous vehicle is now coming into the realm of possibility as the sophistication of the technology grows. Can you comment more on what sort of challenges and obstacles you feel the industry needs to overcome in your field, both locally and globally? And what sort of approaches are you trying to take to overcome those obstacles?

Speaker 27

So we are working on a very, very exciting, but potentially world changing problem, which is autonomous driving. And the first challenge is that this is something that a couple of years ago, everybody still thought was very many years away from becoming reality. And suddenly, through these advances that more or less came overnight in the last 3 to 4 years, especially with deep networks and most prominently image processing and computer vision, it becomes sort of reachable, right? And like Davis mentioned before, with the announcement we said that by 2021, together with Intel and Mobileye, we will build this open platform that has some degree of openness, and we are hoping to build something that is adopted across the industry and becomes mainstream, just because we believe the problem is so difficult that if each of us tries to solve it alone, it will just be harder and take longer. So one of the very difficult problems is bringing it from the research lab to a product, right, and not making it work some of the time or most of the time, but making it work all the time, right?

And so we as a car company with a long tradition, we have a lot of expertise in functional safety, right, in making sure that it's actually safe and our customers can trust our product. And bringing autonomous driving to that level where our customers really trust it, that is one of the challenges that we have right now: how do we convince ourselves that it works not just in the cases that we tried out and the ones we envisioned, but in all the cases. Another difficulty is what we call the cognitive architecture. There are so many different techniques in AI and in machine learning, the subfield of AI, and there are so many different machine learning techniques, and they can be applied to many of these sub problems of autonomous driving.

And the question is which one applies best to which of the sub problems, and how do we combine them into the overall cognitive architecture. Now, for sort of the first time in, let's say, the history of car making, we have this chance of not just building individual functions, driver assistance functions that we build into the car; we give the car this overall broad competence of driving autonomously. And that is just very exciting, but at the same time also quite difficult, right? And then if I may add one more aspect of difficulty, less from an engineering point of view, but maybe more from the development of an organization point of view. If you see how the whole mobility is changing, the whole world of mobility is changing.

And we are stepping one step back and asking what's the business we are in, right? So you could say, well, we are in the business of building really wonderful and exciting and emotional cars. But really, the business we are in is providing exciting and wonderful mobility, individual mobility, to our customers. And so in this new world of transformed mobility, in addition to just building wonderful cars, there is a whole new world of services, and that one is driven by artificial intelligence and for many years has been driven by software. And so 15 years ago, we founded a small software research lab, right?

And now, over the last 15 years, we have hired literally thousands of software experts like myself to prepare for and drive this transformation.

Speaker 17

Let me make 3 quick points about your comments. Number 1, it's really interesting to think about not just the capability AI delivers to do autonomous driving, but all of the other pieces in the ecosystem that need to be developed in order for it to occur. So again, for example, how do you deliver very high definition maps that are updated continuously for every street in the world? This is an example of what I was describing earlier. If we don't have an integrated platform that is driven together by network effects, we can never actually accomplish this, unless it ends up being in very limited and narrow areas.

That was number 1. Number 2, it's important to also think about some of the behavioral challenges if we ultimately want to make money here. I've given lectures over the last couple of years on autonomous driving. And I've had people from all over the world, people stand up and say, this may be a great technology, but I will never give up my car. In other words, as great as the technology may be, there's a behavioral component that's going to have to change.

And that's a change which may be almost as difficult as the technology problem: how do we actually convince people that this is great technology and it may be safer, when these people love sitting behind the wheel and driving? So we have to think about how we solve that challenge. And I had a third one, but I can't remember what it was.

Speaker 13

As soon as it comes back to you, just jump right in.

Speaker 5

But I think, to your point, I drive one of those autonomous cars. And I can tell you, when you sit behind that wheel and you're in autonomous mode, you're always thinking, what if something goes wrong and it wreaks havoc on that street? Now I come back, in that context, to the manufacturing industry, one of the most conservative industries. And you can imagine one of the most conservative industries getting into a complete autonomous mode. It's also an industry where technology was developed to address individual applications. And therefore, standards were always developed after the technology was developed, and different geographies had different standards.

Now if you have to bring in an element of autonomous manufacturing, just like in autonomous cars, you've got to have rigid rules, rigid ways of how it is operated. Otherwise, it is going to wreak havoc. And I think that's a very important aspect.

Speaker 11

And I'd like to just chip in a little bit from our industry, and we're jealous, by the way, because we're out in the middle of nowhere. We don't have Wi Fi, and we can't see where we're going because we're driving drill bits under the ground. You can't see what's in front of you, and the measurements for the drill bit are 90 feet behind you. So I'm greatly envious of everybody up here. But I'd just like to tag along with what they're saying.

To get AI and IoT accepted, there's a human component and a mechanical component. And on the human component, I had one of my European colleagues say, this IoT stuff, I don't want to hear about it. But I want to hear about sensors where we can take some information, provide analytics and then take an action. It's all I can do not to laugh. So sometimes you just have to be really careful about how you speak, how you present it, because some people aren't quite ready for it.

But if you call it cyber operations, they're all over it. So you've just got to roll with it, keep a poker face, smile, and try to get them to the level where they can advance. And the second piece on the human side for us is on artificial intelligence. We could have solved the problem using AI from the very outset, but the engineers weren't ready for that. So we had to do a crawl, walk, run, starting with: all right, we're going to have you tell us what the pattern is.

We'll mimic it, let the machine learn from it until they're comfortable with that. And now they're like, okay, we trust that machine. I'll let it do it and then see how that happens. And so we've gone from them telling us the pattern to the machine finding the pattern and doing it in seconds. And this is just a huge improvement.

And now it's let's beat the machine. So it's just a lot of fun.

Speaker 13

And the human factors you're talking about there involve allowing the emotional release of control from someone whose job was traditionally defined as being the person who made those decisions, and convincing them that they have the ability to intervene if something goes wrong, or allowing them to build up trust in the way things operate. And there are certainly going to be technology acceptance challenges behaviorally, and network effects, to your point as well, where there are other adjacent industries that we didn't quite realize were going to create some obstacles. Do you have any other examples from the industrial sector and manufacturing where you've seen us already make breakthroughs in AI, but perhaps we just weren't using the language?

Speaker 5

No, there are major breakthroughs that have happened. Especially, in the manufacturing world, we talk now about the convergence of the virtual world and the real world. The virtual world is where a lot of technology innovation has happened. Today, you can design a product part by part, piece by piece, have it as a data model; you can simulate every piece of that; you can simulate the whole product and its functionality; you can design and simulate the manufacturing system that would be needed to do that, and all this before a single screw is turned. So the virtual world has evolved a lot.

Now this gets transitioned, supposedly seamlessly, to the actual real world of manufacturing and operations. And this is where now the technology begins to innovate itself. This is where IIoT and those things come into play. So there's a lot of stuff happening out there. But I think there are also a lot of foundational pieces which are still missing.

Speaker 27

I wanted to get back to your comment about people needing to change their behavior in order for autonomous driving to happen.

Speaker 17

Some people do, not all.

Speaker 27

Some people. So I can relate to that because I'm lucky enough to get to drive a lot of BMWs, and it's something you don't want to give up immediately. You say, well, it's really a lot of fun. But there are times when it's not fun. So last night, I was driving from Mountain View to San Francisco, and I said, well, 2 hours should be enough; 2 hours was not enough.

And not a single second of these 2 hours was really fun, even having the ultimate driving machine. And so in these cases, I think people will be very quick to actually adopt autonomous driving, because they get their time back; they can do their emails, they can relax and so on. I think once the affordances are large enough, people will very quickly adopt it. The other question is trust, right? I think that's the more important one.

Will you actually trust the machine to bring you from A to B in a safe manner? And I think that is our responsibility to make sure it works.

Speaker 11

And we also have to realize the machine is not always right and set that expectation level: there will be times where it's wrong, but it will also do the self learning and learn from its mistakes as well.

Speaker 17

But again, this is my point about looking forward and reasoning back, which is: at what point are we going to be confident that it is 100% safe, right? We've obviously seen people trying to prematurely take advantage of the technology, and people have died. And I would worry about that certainly in both energy and industrial as well as in the consumer space such as automobiles. And so the critical question is, are we doing the right thing at the right time, not trying to jump too far ahead? Today there's so much enthusiasm and excitement about it that the natural tendency is to jump to the end state rather than recognize there's so much work that has to be done.

We can't prematurely try to adopt the technology before it's actually ready.

Speaker 13

And that transition, in many senses, is how you build confidence over time. If I think about my own experience working in the semiconductor manufacturing industry at Intel over the past few decades, the basic precursors of artificial intelligence are simple rules engines that are if-then statements. And then over time, you start to realize, I have 100% confidence in that very basic rules engine, and I've discovered a few other areas where I could have 99% confidence if I added more complexity to it. And so these precursors of automated process control have been in existence for quite some time.

And that allows, I think, people who are working in that environment to start to gain that confidence. Where do you think we're going to have challenges, and this is for any of the panelists, where those breakthroughs may be so rapid that they create other network effects or social acceptance or adoption challenges that we would have to overcome? And what role can policy at a government level, or standards at an industry level, in combination with academia, play in influencing that?
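The simple if-then rules engine described here, the kind of precursor to automated process control, can be sketched in a few lines. This is a minimal illustration only; the sensor fields, thresholds, and action names are hypothetical, not actual fab rules:

```python
# Minimal sketch of an if-then rules engine, the precursor to automated
# process control described above. All sensor names, thresholds, and
# actions below are illustrative assumptions.

def evaluate(reading, rules):
    """Return the actions whose conditions fire for one sensor reading."""
    return [action for condition, action in rules if condition(reading)]

# Hypothetical process-control rules for a single tool.
rules = [
    (lambda r: r["temp_c"] > 80.0, "reduce_power"),
    (lambda r: r["pressure_kpa"] < 90.0, "open_relief_valve"),
    (lambda r: r["vibration_mm_s"] > 5.0, "flag_for_maintenance"),
]

reading = {"temp_c": 85.2, "pressure_kpa": 101.3, "vibration_mm_s": 2.1}
actions = evaluate(reading, rules)
print(actions)  # only the temperature rule fires for this reading
```

Adding "more complexity" in the sense described, a 99%-confidence rule, would just mean appending another condition-action pair to the list; the engine itself stays a transparent, auditable loop, which is what lets operators build trust in it over time.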

Speaker 11

I look at the democratization of some of the AI software and try to say, how can you give common standards? Because that's currently missing: we're having to do cloud to cloud, hybrid clouds, and we're doing deep learning on top of that. But we're also trying to figure out how you get what we call analytics in the fog, how you get it closer to the source so you can make a faster decision, where the latency is in the microseconds, at a level that you can trust it. But you can always solve the machine side. How do you solve the human side?

And that's a piece where, if you miss it, you've only solved 50% of the equation.

Speaker 5

Yes. And I think in the manufacturing sector, there is no factory you walk into today which has only one brand of equipment. So it's a very heterogeneous environment. And historically, in the manufacturing industry, these pieces of equipment have never operated with each other in any efficient manner. So interoperability is going to be a huge, huge thing. The other aspect is, therefore, you need to have standards which enable that interoperability, because that's going to be how you create autonomous manufacturing, keeping in mind that it's got to be completely vendor agnostic.

That's a major challenge. The other thing, as I mentioned: as you go across geographies, the standards are not only different, they're completely different. So that's the other challenge. And the third challenge, and I think this might be peculiar to the manufacturing industry, is that it depends on how advanced the particular company is.

And so there are different levels of adoptions today. And that can also be challenging because you're never getting a complete picture of what's going on. And the final thing on a social side, I think I remember and I think most of us would know that when automation became a big thing 30, 40 years ago, there was this huge problem about people losing jobs, jobs are being taken over by automation. You can imagine what will happen with autonomous manufacturing. It's going to be even a bigger challenge.

Speaker 27

So I don't have much to add to that. I think if the value that this new world gives to people is large enough, then there will be a demand, right? So for autonomous driving, if it works and if it's safe, people will just want to have it. And then everything else, like regulation and so on, will just have to follow. So for me as a techie, that is sort of a simple answer, right?

Do the engineering first and do it well, and the other things will have to follow. And so I think getting it to work, and getting it to work safely, that is the main thing.

Speaker 17

So I always have an example that I throw out, and I'd be curious about your reaction. I mean, part of the problem, for example, in autonomous driving is that there are a billion cars on the road today. And a lot of those drivers are not good drivers. There were already surveys done in the U.K., which you might have seen, where drivers were asked, if they saw an autonomous car, would their behavior change? And a very large percentage, I've forgotten, 40%, 50%, said that if they saw an autonomous car, they would try to take advantage of it. They would try to bully it. They would make sure they got into the intersection. You know, there are real problems associated with the initial adoption. So I've always said government can help on this.

It can create standards. And in the case, for example, of autonomous driving, the natural thing to do is to carve out a section of a city and say only autonomous cars. And I think a country like Singapore, for example, would be the natural place for this to start. If you could have Singapore carve out 10 square blocks, and then 20 square blocks, and say only autonomous cars will operate there, and it works, assuming it does, you'll start to build the confidence, you'll build the trust, you'll start to create the opportunity for people to feel comfortable adopting the new technology perhaps even faster. But in the absence of that, you're going to run into these behavioral issues.

Speaker 5

David, the question I always ask is: all of us sit on airplanes and fly, and it is always flying on its own. In the middle of nowhere, 30,000 feet up, we are happy to do that. So I think it's also a matter of getting attuned to the idea, to some extent. I mean, today I'll sit on a plane without thinking twice. The same argument should hold there, right?

Speaker 27

I agree, though. I mean, we'll quickly adopt it because it will work, just like airplanes. But on the other hand, driving is a heterogeneous environment, right? So that is different from the airplane. And so that notion of trying to take advantage of autonomous cars, that I can completely imagine.

I remember when I was a student in the '90s, I went to all these AI conferences and robotics conferences. And that was the fun of any conference, right? People were showing their robots, and you'd test them and they would fall over, right? And it was just fun. And I think that's a natural reaction, to try to show that you're superior to the robot.

Speaker 17

And remember, airplanes have air traffic control.

Speaker 5

That's good.

Speaker 17

It's a little bit different. There is centralized control.

Speaker 5

Today it does, in a big way, but it was much different when it started, right?

Speaker 17

And back then they had people flying them; they weren't going by themselves, right?

Speaker 5

It'll evolve, yes.

Speaker 17

It'll evolve, yes. It just may take a little longer to be adopted on a large scale.

Speaker 11

And I think a big issue coming up is what we're all doing to the communication networks, as you have to put in broadband to handle the big data and communication packets that you're sending through cell phone signals. As we get to terabytes of information going back and forth, how is the infrastructure going to have to change for this to happen?

Speaker 5

Intel is figuring that out.

Speaker 11

Thank you.

Speaker 27

I think it's not just a topic for regulation, right, and maybe also the setup of infrastructure. It may also be a topic for all of these dynamics in socio-technical environments, right? Because it may also just be a matter of politeness toward machines, right? So I could imagine that one way could be that you say, well, I'll take advantage of it and I'll cut in front of this autonomous car. The other way could be that you say, well, there's an autonomous car.

It may not see me. I'd rather stay away and be a little careful, right? And you naturally give the right of way to autonomous cars. And I think it's probably more empirical, that we'll find out how this thing works.

Speaker 17

It was only a British survey, so Americans will be more polite.

Speaker 5

But I want to touch on the topic that you just mentioned, the data explosion. I mean, data is going to explode. I can tell you, in the manufacturing industry, a simple, what we call a drivetrain, a motor, a gearbox and a load with a variable speed drive, can generate 2,700 to 3,000 data points. And that's just one asset; you could have 10 of those, 100 of those. Therefore, I think one way this is going to be addressed, and one way autonomous manufacturing will also come into place, is a lot more edge analytics and edge computing.

And when I talk about edge, it's not just alongside assets, but throughout the manufacturing environment. You have to start having computing at different points before you reach the cloud. And I think that is really going to be the solution to the data explosion.

Speaker 11

And also the ability to operate across clouds, hybrid clouds, and proprietary systems, and do your analytics at the same time, understanding that it's not just an analytics problem; it's also a software and technology problem.

Speaker 5

Right. And you want real time, near real time analytics. That's the other thing.

Speaker 13

Yes. In the case of the energy sector, right, you often don't have the luxury of an Ethernet connection out in the middle of a shale field in some remote part of the world. So you have to rely on all that AI capability being placed locally, right where the decisions have to be made, in order to really make that work and to meet the low-latency requirements so that you don't damage the equipment. So how is that going to affect your industry, do you think?

Speaker 11

It's going to affect it quite a bit. So you'll get toward that analytics in the fog. How can you get that decision made in milliseconds? If you operate it similar to a smart city or smart grid, how do I connect those parts and pieces and make an intelligent decision in milliseconds? We're getting there, but you're also adding Hadoop clusters for large amounts of data here, interacting with in-memory databases there, putting them all together, and feeding the result back to actually tell the PLC to take an action at the oil field.

So you're going to have to get high-performance computing and high-performance analytics in the cloud, and perhaps in the Hadoop cluster.
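[Editor's note: a minimal sketch of the edge-versus-cloud split the panelists describe: a cheap rule evaluated next to the asset answers within a millisecond-scale budget, while anything non-urgent is deferred to the heavier cloud/Hadoop tier. All names and thresholds here are illustrative assumptions, not anything stated on the panel.]

```python
import time

# Illustrative values; a real deployment would derive these from the
# asset's baseline model and the plant's latency requirements.
VIBRATION_LIMIT = 4.0   # mm/s RMS, hypothetical alarm level
EDGE_BUDGET_MS = 5.0    # the local decision must land within a few ms

def edge_decision(vibration_mm_s: float) -> str:
    """Fast rule evaluated next to the asset (the 'fog' layer).

    Returns an action for the PLC immediately; anything that is not
    urgent is forwarded for deeper analysis in the cloud/Hadoop tier.
    """
    start = time.perf_counter()
    if vibration_mm_s > VIBRATION_LIMIT:
        action = "PLC_TRIP"          # act locally, no round trip to the cloud
    else:
        action = "FORWARD_TO_CLOUD"  # batch analytics can take its time
    elapsed_ms = (time.perf_counter() - start) * 1000
    assert elapsed_ms < EDGE_BUDGET_MS, "edge path must stay low-latency"
    return action

print(edge_decision(5.2))  # urgent reading handled at the edge
print(edge_decision(1.1))  # routine reading deferred to the cloud
```

The design choice mirrored here is the one the panel keeps returning to: only the time-critical decision lives at the edge, and everything else flows onward to centralized analytics.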

Speaker 13

And ultimately, though, this is going to have to drive some sort of business or societal value proposition. In these other industries that we've been talking about, how do you see that value proposition manifesting itself?

Speaker 5

Yes, sure. I mean, the manufacturing industry is all about, if I may put it very simplistically, profiteering. And also, of course, generating the goods and products required for a comfortable life. The last I remember, global manufacturing is close to $12 trillion or something like that. And I always think about

Speaker 28

Hello, everybody. My name is Tony Salvador. I'm a Senior Principal Engineer at Intel. I'm a social scientist. I'm one of the ethnographers.

It is my honor, my honor, to have gotten to know these people over the last couple of days. And I say that because each one of them is doing work that is important and work that goes unvoiced. It's work that gives voice to people who have none. It's work that gives power to people who have none. It's work that gives strength to people who have none.

They will each talk about what it is that they do. They will talk about the hopes they have for AI, what AI can do to help them do their jobs better and help the people they're trying to serve. Each one of them has a passion behind what they're doing and a reason for why they're doing it. So just very quickly to introduce them: this is Sixto Cancel. He's got a nonprofit called Think of Us that helps foster kids work their way through the system.

He'll tell you about that. This is Michelle Dillon. She's a Senior Vice President and COO of the National Center For Missing and Exploited Children. You heard John Clark, the CEO, speak earlier. This is Ina Fried.

We have a little joke going back there. This is Ina Fried. She's a senior editor with Recode, recode.net. And this is George Siemens. He's a professor at the University of Texas at Arlington, not to be confused with Texas A&M, which is also in Arlington.

Each one of them will actually talk a little bit about what it is that's really driving their passion, why they're here and why they think AI is important. And we'll just start right here.

Speaker 29

Hello. Good evening, folks. That's the part where you just kind of say something back. There we go. Oh, afternoon.

There we go. There goes the confusion. So my name is Sixto Cancel, and I'm the CEO and founder of Think of Us. We seek to leverage data and technology to really help young people in the foster care system heal, develop and thrive. And so we've been developing this platform that allows young people to build their own personal advisory board, so that the adults in that young person's life can coach them through their adolescence and on to being on their own.

And so when I think about the power of AI and the possibility of AI, I get really excited, because I see it as the ultimate assistant tool for young people figuring out how to go from being an adolescent in the foster care system to having a pathway, or several different pathways, shown to me, so that when I turn 18 and I have not found a family, have not been adopted, I have not only built the skills that allow me to be self-sufficient, but have also chosen what my future is going to look like. Right now, we have about half a million young people in the foster care system. And unfortunately, when you are in that system, you do not know what's going on. You do not have a lot of say in your case plan, in what happens to you. And so when I think of AI, I see it as the ultimate advocate, the ultimate empowerment for young people in these systems: being able to get the information that they need, so that they can access the services that they need, make the plans that they need to, and really focus on healing, developing and then thriving.

Thank you. Michelle?

Speaker 30

That was a good one. Michelle Dillon; you heard my boss, the CEO of the National Center For Missing and Exploited Children, speak earlier. Very, very happy to be here. And thank you, Intel, for hosting us. The National Center For Missing and Exploited Children, we're a 32-year-old organization.

We're a nonprofit, nongovernment, non-investigative organization that exists to serve families, law enforcement, and all of those stakeholders who are looking to protect our children. We have 3 primary focuses and missions. One is to find missing children. As John mentioned earlier, we've been very successful. We found over 232,000 missing children and brought them home to their families.

We've done a remarkable job in that area. That was our beginning 32 years ago, founded as a result of the tragic abduction and murder of Adam Walsh. Over those 32 years, we have seen the scope of the problem grow. In addition, we've seen the advent of the Internet, and that has brought a whole host of other issues along with it. Much of what we're going to be doing with Intel in our new partnership is focusing on how we can use AI to fight exploitation. We have the CyberTipline, which this year is receiving 8 million reports regarding children who are being sexually exploited, primarily online.

Much of that regards individuals who are trading child pornography. We have 25 analysts working on the CyberTipline who are trying to handle 8 million reports. Clearly, as was mentioned earlier, looking for the needle in a stack of needles. And in this case, every needle is a child. And we need to do everything that we can, by leveraging AI and leveraging technology, to sort out what is important, to help us prioritize those leads, and to make sure that nothing is slipping through the cracks simply because we're relying on humans.

We need to be able to teach the machines to do much of what the humans are doing. And we're thrilled to be here and be working with Intel.

Speaker 28

Thank you. Ina?

Speaker 3

So my day job is covering this industry, and I've been covering tech for about 20 years now. I started at CNET. I've been with Kara Swisher and Walt Mossberg for about 6 years now, since we were AllThingsD and part of The Wall Street Journal. We went independent. We got bought by Vox Media.

So I write about this industry, but Recode and Vox are also partners with Intel on the Hack Harassment effort, which is really personal to me. As part of the LGBT community, and having been a crisis-line volunteer a couple of times in my life for LGBT lines and an LGBT youth line, the amount of harassment that LGBT youth in particular get, at a time when they're very vulnerable, is a particular area of passion, as is just seeing online harassment in general, whether it's women getting harassed for speaking out or people of color being harassed. And certainly, and we may talk about this given the environment we're in, I think that's only increasing. I love what Doug said at IDF: that technology created this problem, and it's really up to us as an industry to solve it. And certainly, in general, we chronicle the industry more than getting involved. But harassment is personal to us.

A lot of our writers at Vox come under attack for their political beliefs, their sports teams. I always joke that I've faced a fair amount of harassment in my career, mainly around 2 areas: a small amount for being a transgender woman and a large amount for covering Apple. And I get a lot of just, you know, the craziest hate ever. I'm less concerned about the systemic issues around the last group. But I think AI hopefully can help us create an environment where simply being a woman and talking about being sexually assaulted would not be an invitation to harassment the way it is today.

I think anyone who puts themselves out there online is likely to face significant harassment, and people choose to speak out nonetheless, but that shouldn't be a choice anyone has to make just to tweet or speak out online.

Speaker 28

Thank you. And George?

Speaker 1

I'm George Siemens. I'm with the LINK Research Lab at the University of Texas at Arlington. We have a guiding focus in our research activity, which centers on what it means to be human in a digital age. A number of the areas we look at relate to work and automation, and to the experience of success for all students.

We're also focusing on future knowledge systems. What does a society look like in an age of AI? And how do we learn in that kind of a setting? A few things are quite prominent: we are likely the last generation that will be smarter than our technology. And what are the social implications of that?

We've also seen, and this has come up on numerous occasions on different panels, what's coming over the next 5-plus years. So there are enormous social issues around it, but not even just the social issues; it's the learning needs that exist within that kind of a setting. Meaning, instead of a 4-year relationship with a university, where we go to school and get a bachelor's, we end up with something that likely looks more like a 40-year relationship with the university. And what does that look like when we have this system that is prevalent in our lives, like the health care system is today, literally from birth to death, where we're going to be essentially learning beings? My primary issue with that is that not everyone has access to that system today.

And fewer and fewer of the people who actually need it most are having access to it. If you look at the completion rates of college education by income quartile, the individuals in the bottom quartile who in the 1960s were sitting at around 20% access to higher education have actually dropped slightly. The top quartile has gone up from around the 50% to 60% level to in excess of 90%. So basically, if you're born poor in America, don't count on education to lift you out of that poverty, even though that's a narrative that we have. So I'm very interested in what the technological systems are and how they need to integrate with our social systems, so that we can have those students be successful who right now are largely excluded from the system.

In fact, the primary system that we know of for elevating people out of poverty is now inaccessible to those who are in that position.

Speaker 28

Thank you. So thank you all. Actually, you bring up a point right at the end there that I'd like to lead off with, if you will. It's that notion of technology solving a fundamentally human problem. I think each of you talked about things that are really, fundamentally, us caring for one another in one way or another, right?

Whether it's somebody who started off with a bad lot in life or somebody who has something to say and is being pummeled as a result of it. How do you start thinking about that integration of technology with what are fundamentally human social problems? And maybe we can start back with you again. And then I'd like to move to Michelle to think about that, because you have some examples, I think, of how you're using technology and people to solve some specific problems.

Speaker 1

Well, I think at this stage, we're very close to getting to a level where we can say really dumb things that, 5 years down the road, will be watched and made fun of by colleagues. So I think it's always difficult in the AI space to forecast even a few years down the road. But I do think that we need to turn, first of all, to the essential learning sciences literature and the effects that have the greatest impact on student success. Now, this could be a young student in the K-12 system, it could be someone who's in higher education, it could be somebody who's been in the workforce and needs to come back to reskill. So I think in all of those situations, the human factor is so central to our success academically. And many of us have stories, I'm sure, of someone who went out of their way to care for us and it changed our lives. And so one of the areas that we look at as a lab now, we're very interested in wellness and the human condition.

And I think cognitively, we need to just say, look, damn it, computers have won. We can't out-cognition our computing technology anymore, and that's only going to accelerate that gap. I think what's left, the last domain of humanity to stake its claim, is actually the feeling, the emotion, the awareness, the affect and those elements. So our systems will need to account for the things that technology does far better.

Speaker 29

To be able to access the Internet and see these stories and understand that there was this very slight chance, this possible pathway, to be able to leave Bridgeport, and I saw someone do it, that encouraged me, right? And so when I think about the role of AI, at least in youth development in our work, it's to be able to say there are a couple of different hands. The thing about all of us is that we're all born with different privilege. You know, the privilege of being a male, white privilege, you name it, we could go on for days, right? And so you may be born with 12 cards and this person may be born with only 3 cards to play.

And the person with 3 cards to play still has a pathway to winning the game. It's just that the person with 12 cards has a better chance of winning. So AI can help young people really engage in the right developmental opportunities, take the right steps, to be able to play the cards that will get them to being self-sufficient and to employment later down the road. Right now, what you have is a group of people, beyond just foster care, who, you know, don't see that pathway. And I think that what we've learned in the last week is that there's a whole group of Americans who are hurting, both young people and adults, who do not have that sense of, this is where I want to go.

And I think that artificial intelligence can help us. A very practical example: on our platform, when we have a young person creating a budget, the AI, in the future, would be able to understand that budget and provide recommendations on specific goals around their money habits, so that they're not, you know, going out for that pair of Jordans or a PlayStation and risking homelessness. So it's things like that that we deal with.

Speaker 28

So, kind of like we heard before about personalized medicine, you're thinking about almost a personalized pathway for people to navigate through the system. It provides some sense of encouragement for them as well, and it helps the foster care system. When you're looking for missing or exploited kids, how does AI actually help with the really hard work that you guys do? Looking for a missing child or an exploited child is not all fun and games. It's hard, emotional work.

How does AI help you? How does it help you do your job better so that those kids are served?

Speaker 30

Well, we're just scratching the surface at this point with AI. We do have some individual tools that really stand alone. What Intel is helping us do is integrate those, so that the analyst or the user is able to utilize the different tools in a much smarter way. At this point, we have many visions for what Intel is going to be able to help us do with AI. Specifically, one really concrete goal we have is decreasing the amount of time it takes for a report of a child being abused to come into us and go out the door.

The volume is just ridiculous. Again, we're a nonprofit. And every year, the numbers double. Last year, it was 4 million reports and we were pulling our hair out. And the year before that, it was 2 million. Now it's 8 million. And we see no reason those numbers are going to go down.

And where we're looking for AI to help is this: right now you have many analysts who are taking reports as they arrive, looking for information that may indicate that one poses a higher risk to a child than another, trying to put those in packages and make the reports available to law enforcement in over 100 countries. While we're a U.S. organization, we actually serve the entire globe, because you have individuals exploiting children from everywhere. One of the concrete goals, and what we really feel will be a huge success, will be if we're able to take these leads that come into us and prioritize them using AI, having the system see things that I'm not going to see.

We talk a little bit about gut, where something just feels like it might be more important than another. We don't know how to define gut. We don't really know how to define why this one seems like it may be more important. But if we can teach an algorithm, if we can teach AI what to be looking for, really a learning model with feedback, it can identify, as cases are coming in, which ones need to rise to the top. Because we don't just want to make our own jobs easier and say, well, great, it took us a day, here you go, and kick the can down the road. We need to be able to make these reports better and prioritized for the law enforcement agency that's receiving them.

If I'm putting 300 reports in your lap this week, it would be very nice if I could tell you which ones have the highest likelihood of a real child being abused. Those are concrete things: we're going to have to learn how to translate gut, translate clues that are there that we just don't really know how to identify, into a tool that can really augment the human effort in making a difference.
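[Editor's note: the "learning model with feedback" idea described above, scoring incoming reports so the highest-risk leads surface first, might be sketched roughly as follows. The risk signals and weights are entirely hypothetical; a real system would learn them from analyst feedback rather than hard-code them.]

```python
# A toy priority ranker: score each report from a few hypothetical risk
# signals, then surface the highest-scoring reports first.

WEIGHTS = {
    "victim_identified": 3.0,   # hypothetical signal names
    "repeat_offender": 2.0,
    "recent_activity": 1.5,
}

def score(report: dict) -> float:
    # Sum the weight of every risk signal present in the report.
    return sum(w for k, w in WEIGHTS.items() if report.get(k))

def prioritize(reports: list[dict]) -> list[dict]:
    # Highest risk first, so analysts see the most urgent leads at the top.
    return sorted(reports, key=score, reverse=True)

reports = [
    {"id": 1, "recent_activity": True},
    {"id": 2, "victim_identified": True, "repeat_offender": True},
    {"id": 3},
]
print([r["id"] for r in prioritize(reports)])  # highest-risk report first
```

In the feedback loop Michelle describes, analyst decisions about which leads actually mattered would replace these fixed weights with learned ones over time.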

Speaker 28

Again, it's sort of the idea of swarming, almost, of being able to swarm around an idea with a particular set of technologies. I saw an article earlier about a student, I think it was at Baylor, who had been harassed. And then 300 of her fellow students actually gathered around her to escort her to class. And I'm wondering, is there a way of thinking about that kind of support in an online setting, where somebody is just trying to voice an opinion, or share an impression, or reveal some way that they're not being treated fairly? How do you think about that?

How does AI help us? Or could it, or might it, or what's your hope for it?

Speaker 3

Well, I think you point to what AI can't do, which is that AI can't be that human support system. I mean, hopefully AI can do things like recognize hopelessness or suicidality online. There are areas where AI can help, or provide resources at the right time. And I think social networks are getting a little better at that. Where I think the hope for AI around this lies is in identifying hateful and harassing content.

So far, social media is not doing a very good job of policing itself. Even at companies that say they want to do a better job, too much is slipping through the cracks, and it's all reported after the fact, when the impact on the person being harassed has likely already been done. It's still beneficial to remove that content, even though it's probably had its primary impact on that person already; it's important because its continued existence has a continuing effect on other people who might read it and then be less likely to speak out themselves. One of the things the Hack Harassment group is trying to work on is this: if an algorithm can recognize, before something is posted, that it might potentially be harassing, is it useful to notify the person about to send that tweet, hey, this could be perceived as harassment, are you sure you want to send it?

One, can the technology get good enough to identify it? Can it understand things like sarcasm and playfulness? Two, does it have an impact to share that with a person, or do people actually know whether they're harassing? I don't claim to know the answer to that. And I think the other piece is, is there a way to do it? One of my favorite examples, and this was human-moderated, but you can see where it would go with AI: when I was at CNET, we did have comments on our site, and we had a way to block somebody. And the most effective block we had left their comments showing up on their own screen, but they didn't show up on anyone else's, which I thought was brilliant.

This online troll is somewhere in their little cave making their nasty comments. In their mind, their nasty comments are right there, and nobody else has to see them. And if you asked me what the world could look like with AI, I'd like to see a world where the AI is sort of ranking people. On eBay, you know who to buy from because it's an online community and things are ranked and rated. And I think computers could have a role to play in really identifying, this person has only been on for 3 months, they've never said anything constructive; basically, making sense of whose comments are constructive and whose are not. And obviously, when you're talking about speech, it's a very tricky area, and we don't have time to get into all of it. But an unpopular opinion isn't necessarily harassment, and how do you dial the knobs? It's a very tricky problem.

Speaker 28

Well, and I think that's exactly where I want to go next. I think that on for we'll start over here at this end. You asked the question of what does it mean to be human in this digital age? What does our humanity actually mean to us, right? And especially if we all 4 of you are talking about having different relationships with the technology.

And actually, I just noted on your sort of profile page on Recode, you have a pretty extensive ethics statement, right, the companies you work for and don't work for and that sort of thing as you're reporting. I'm kind of wondering about what you think about the sort of the ethics of what it is that you're talking about. Where do you think it's going to go? Where does it have to go? Recognizing protected speech versus harassment speech is an important thing.

We have that in our daily lives, but we don't have it particularly well in our digital lives. How do we start thinking about those kinds of issues? We'll start over there and then move back this way.

Speaker 1

Well, I think the scope of the challenge of integrating AI into our cultural and social systems is vastly underestimated right now. It will be a stunningly complex, contentious challenge. And the reason it will be such a big challenge is that technology makes explicit things that are ephemeral in physical space. Meaning, we have a conversation here; ordinarily, it would be done, but it's recorded now, it's available, which means it can be analyzed. You can code and tag different parts of the video, and you can do natural language analysis of the conversations and conceptual understanding, among a number of other things. But that's only possible once it's been rendered in an analyzable format. So that's what we have to face now.

All of our assumptions, our legal system, the weight a judge would give something as an implicit bias, that now has to be made explicit. What weight do we put on race as part of a sentencing recommendation? When an individual goes online, does a little warning come up that says maybe you shouldn't post this? There's a number of things at play that have to be identified. So everything that is now ephemeral and intuitive has to become mechanistic and explicit. And we have to do that cover to cover.

Pick any aspect of society, from schools to our social and cultural institutions to our healthcare system. And it's going to really make us confront ourselves, because the implicit bias that we all function under has to be surfaced and confronted head on. I just want to emphasize that's probably the biggest thing I think we're not understanding in the AI conversation: we have to have a long, deep, soul-searching discussion with ourselves about what it means to be human in an age where we can pass a lot off to technology, but there's a lot that we probably shouldn't.

Speaker 28

Yes. And the way you're talking about it, there's a relationship that you're building between the technology and the people, and the people and the technology. I think the way that you guys are talking about the technology, it's almost like a helpful tool, an additional tool, something that provides a personalized pathway. If you have just enough information, it helps this one kid go through, or it helps your analysts identify, okay, that's where that kid is,

Speaker 8

right, that kind of thing.

Speaker 1

Tony, can I just throw that back a little bit? I mean, is there one area in society where technology doesn't become the alpha? If you think about it over a long period of time, technology always takes the alpha role.

Speaker 29

And I think that's interesting. But when it comes down to healing trauma, I don't see technology, AI being the thing that helps heal the trauma. And that's why I think in our case, it's a tool because it's that human connection, that rewiring of the brain that happens from that interaction with the human that causes the healing. So that's why I think in this, it may, but it may not.

Speaker 3

Yes, I'm with you; I think it's going to be very tricky. I don't even think we could tell the algorithms today. If I went around this room, maybe even if I just went around the 5 or 6 of us, I don't think we'd totally agree on what harassment is. I certainly know if we go around the room we wouldn't, and definitely not if we go to the culture as a whole. So I think at best it's going to be a very evolving role.

It's not just the technology. It's not just a technology problem. I think technology can help, but certainly if the events of the last week have made me think about anything, it's: what would the government consider harassing speech? I think I know what the current government would consider harassing speech. I think the next government might have a very different view, one that might differ greatly from some of the voices that are being harassed.

So I think there's a long way to go before technology is able to solve these problems. And even in what I think we're more realistically talking about, where technology can be a tool to help solve them, it's still tricky, and it's not just a matter of Moore's Law.

Speaker 30

So it's an interesting intersect, at least from my perspective, talking about child pornography, which is really a misnomer in the sense that we're talking about child sexual abuse imagery. It's an interesting intersect because this crime has existed for a very long time, and technology changed the face of it. We've spoken with many victims and many of their families. And to hear the impact that technology has had on their healing is devastating: individuals who experience the horrible trauma of being abused then have somebody photograph and memorialize the abuse and then share it online.

That abuse never ends. That abuse continues every time that image is sent, every time somebody downloads the images. So at the same time that technology has compounded this horrible event, technology is also the key to ending it. It's this very strange circle that we've found ourselves in, in terms of utilizing the Internet and basically all digital presence in new ways to reduce the further trauma and allow for the healing of these particular children. The problem existed before, and technology has made it worse, but it's also a big part of the solution. So it's a challenge.

Yes.

Speaker 3

I mean, I think that's the parallel: the problem has grown significantly. Harassment and bullying existed in the non-digital world, but technology's ability to magnify negative actions, to give them these outsized, longer-lasting impacts, is tremendous. So it has certainly greatly magnified the problem. I think we're both hopeful that it can also speed a solution.

Speaker 28

Well, I think, and I'd like to go toward that end. George, you taught the first MOOC, I believe, right? The first massively online... I don't remember what it stands for now. Massive Open Online...

Speaker 5

That's it.

Speaker 24

Of course it is.

Speaker 5

Thank you.

Speaker 28

I just got it. Once something becomes a little acronym, it becomes a term and that's it. You taught on the first one of those. And what you were doing, I think, was trying to make learning available to a wider array of people. What kinds of things do you think, and this is also for you, what kinds of things do you think people have to start learning so that we can actually start to address these kinds of issues?

Well, part of the challenge,

Speaker 1

I think, is that the infrastructure isn't in place for the kind of reality we're starting to inherit. Meaning there's a lot of legacy pressures that we're bringing into this space with us. I'll give you an example. When I was at Red River College in Manitoba, this was in the late '90s, and ours was the first college in Canada that went exclusively laptop. And so the technology came in and did the work of the old for the teacher.

So instead of the projector slides, we had PowerPoint. What changed in that? Half of the difficulty that we have is that with the legacy inheritance of existing habits that can now be addressed with AI, we're not clear on what those might be. And I'm worried we're going to use AI to just keep teaching the way we've been teaching. One of the things I wanted with the first MOOC was to increase the capability of people to own their own learning. I felt that self-regulation, self-ownership, self-identity were critical, and they shouldn't be given over to a faculty member to drive and shape for you as an individual. Now as a result of some of those actions and activities, there's a whole set of technical skills that need to emerge.

There's a number of infrastructure parts that have to be addressed. A lifelong portfolio of learning, we're developing a personal learning graph right now that's an attempt to have a stable identity that you own for life that captures what you've learned formally and informally. I mean, that's something that has to happen. We have to have a school system that stops thinking in its current timelines, a university that stops thinking in 3 credit hours and instead says, what do you know, George, and we'll feed you content that fills the gap in alignment with the degree that you're seeking to pursue. That's not in place.

Speaker 28

No. And it sounds very similar to the kind of thing I mean, you're nodding. I think it sounds similar to the kind of thing you're thinking about, a personal pathway, a graph, you called it?

Speaker 29

Yes, I echo everything you say. I think you drove the point home. But yes, I'll pass it on because you wrapped it up really nicely.

Speaker 28

I think we have a couple of minutes left. I'd like to draw out one thing I heard just today in listening to you: it's not only a human problem, or human problems, that are actually being resolved. There's a notion here, in the work that you're doing and your hopes for AI, that you're trying to increase the kind of compassion that we have for one another, not just in our own families, but across time and space, right? And there's something there, I think, in what you're doing, whether AI is helping kids go through a particular program, or helping find missing kids, or helping stop harassment, or helping people learn, which is probably one of the more compassionate things that we do for each other.

I guess in the last couple of minutes, if you each had a little thing to say, how do you think that technology can help to increase compassion?

Speaker 29

So when I think of AI, I think of many different roles it can play. But the first one being the ability for it to discover, discover that pathway, discover something that we didn't know before, provide an insight. Then the second most powerful thing I feel like is the ability to intervene, right? So I lost 2 of my siblings and I very much blame the failed systems, right? No young person should be able to grow up in a country that promises them that if you work hard enough, if you do your part, if you go to school, that you can be successful, and then have to die in their early 20s before that was even a possibility.

So to me, that point of intervention is so critical, and AI has a role there. We see an example in New York City: a young boy, 6 years old, came to the attention of the child welfare system 6 times, no one in upper management was alerted, and that child died, right? There could have been some intervention. And then the last thing is, when I think of the power of AI, it's the ability to have that systemic change based on the data. So discovering, then intervening, and then using all of that learning to figure out how this creates the new system.

And it does something positive.

Speaker 30

Sure. And I think where technology can increase compassion is that technology has the reach, the ability to communicate and reach into corners that we never would be able to. And to tell the stories that people would not otherwise be aware of, whether it be children who need good, loving homes, or, in our case, children who go missing or children who are sexually exploited, which most people assume happens to somebody else, in another country. For us to be able to communicate not only that the problem exists here, literally in our backyard, but also that there's something people can do, because awareness empowers people to protect the children near them. It happens here too.

Speaker 28

The stories we tell ourselves matter. Yes. And then just finally right here really quickly.

Speaker 3

Yes. I mean, technology's ability... I think we're talking mostly about AI, but virtual reality is one of the technologies I look at as a real tool for empathy, where you can literally step into someone's shoes. There's this online movie, Clouds Over Sidra, that Chris Milk did that literally puts you in a Syrian refugee camp, and it's a movie, but you feel some of that confinement. I think one of the keys to stopping people from harassing is

Speaker 28

making it clear that there are real people involved. There are real people and there's real pain. But there's also the real possibility of doing really good work to get past all of that and maybe make us a better society overall. Thank you very much, panelists; really appreciate it. Give them a hand.

Our time is up. We have to go.

Speaker 22

So one last round of applause for this great panel. I think we've all had a great, insightful, and challenging panel to wrap it up. I hope that this was useful. On behalf of Intel, we want to thank you for attending. There's a reception with snacks and drinks out in the lobby.
