Microsoft Corporation (MSFT)
Partnership
Nov 4, 2015
Good morning and thank you for tuning into our state of AI live stream with Microsoft. My name is Brent Breslin. I'm the Senior Research Analyst with Piper Sandler covering the cloud software and analytics space, tuning in from Bend, Oregon. I'm joined by Harsh Kumar, our Senior Research Analyst covering semiconductors, tuning in from Memphis, Tennessee. We're very pleased to have 3 distinguished speakers from Microsoft joining this morning.
Doug Burger, Technical Fellow and one of the leading active researchers in computer architecture. Welcome, Doug. David Carmona, General Manager of Artificial Intelligence at Microsoft. And Jonathan Nielsen, a Finance Director in Microsoft Investor Relations, who is online but not on the live stream at this point.
Welcome Team AI at Microsoft.
Thank you very much. Thank you. Thank you, Brent.
Before we dive into Q and A, maybe let's start with a brief description of your background, kind of role at Microsoft and maybe what location you're joining us from this morning. Doug, why don't you kick things off here?
Sure. Good morning, everyone. And Brent, thank you for giving us the opportunity to participate. I'm Doug Burger. I am a longtime researcher in computer architecture.
I was a professor at the University of Texas for 10 years, running a research group where we built CPUs and new types of hardware architectures in silicon. I moved to Microsoft in 2008 and started up a computer architecture group in Microsoft Research. My team and I built a number of new things, one of which was the Catapult FPGA platform, which forms the basis of Azure's smart networking and accelerates applications in Bing. That was actually a large-scale distributed system that we built, not just the use of the chips.
And then, about a year and three quarters ago, the company asked me to move most of my team into Azure to really start a new direction, as our cloud grows and hardware and full-stack innovations become more and more important. More recently, I've taken on a more active role in AI architectures and systems, so that's really my focus now: AI infrastructure, at the hardware and low-level software layers. And so that's what we've been doing.
Great. Very good. David?
Yes, connecting from Redmond. Actually, I'm probably a short walking distance from the office, but that hardly matters these days, so I could be anywhere. But yes, I'm in Redmond. I joined Microsoft almost 20 years ago, and my background is software development.
At that time, Microsoft was very famous for Windows, right? But I was actually attracted to Microsoft because of the developer tools, which are like the DNA of Microsoft. So I held a variety of roles, in engineering and in business, always connected to the developer business. I used to lead the business for Visual Studio, online and Azure tools and so on. But then, like 5 or 6 years ago, I noticed that something was changing in the industry: there is a new kind of software, and it is called AI.
So I made a transition to AI, and now I lead the AI and innovation team on the business side at Microsoft.
Very good.
Brent, I'm sorry, I forgot to mention. I'm in Bellevue, probably just a couple of miles away from David. I run M and A Campus.
Well, that's the power of Teams, right? We can all do this meeting virtually. So very, very cool. Let's dive into the current state of AI to frame the discussion here. If you reflect back over the last year, what would you say were some of the landmark AI innovations that occurred, whether across silicon, systems or this new class of AI models?
What were the landmark things that investors should pay attention to over the last year? David, go ahead.
So let me just start with the technology, recapping on Build last week. I'm sure that we'll go into more detail today, but one of the key things, and it flew a bit under the radar: you have probably heard of the AI supercomputer that was announced last week.
The implications of that go far beyond hardware. The key thing we saw with this AI supercomputer is that you can train massive models, and by massive I mean in the billions of parameters, where we used to be in the order of millions of parameters. That can sound very cool: yes, you get better models, more accuracy. But what it really means, and this is the key change that is certainly something to be on top of, is how it's changing the way you develop AI.
AI as a platform was something that was not clear before; companies had to develop AI models on a case-by-case basis. What we realized with these massive models is that they can be multitask; they can be more generic than the traditional, very customized, vertical models for specific scenarios. That will bring a very interesting motion in the market, which is companies embracing these massive models and then customizing them for their particular scenarios and tasks, therefore bringing state-of-the-art AI to more companies.
So that's certainly one thing that I would highlight to be on top of, and Doug, I'm sure you can provide more there. But before that, let me mention another thing, also from last week, that I think is very important: the concept of autonomous systems. Everybody associates autonomous systems with autonomous driving in a lot of cases, but we're forgetting that autonomous systems are much broader than that.
And last week at Build we announced the first preview of our autonomous systems platform, so every developer now has access to it. It goes beyond motion control: it targets not only robotics but things like process optimization, machine calibration and a lot of scenarios that are much more real for companies to address in the current situation. So that's another one that I would highlight. Doug, I'm sure you can add a lot on top of both.
That was great, very complete. A couple of things I might underscore, in particular with the AI supercomputer announcement, is really what David said: the size of the models has been growing much, much faster than I think any of us predicted.
It was already growing, but now, just as a round number, it's about 10x per year, which is a growth rate that far surpasses Moore's Law while it was active, and we all saw how disruptive that was over 50 years. And what we've seen so far with those giant models is that the bigger we train them, the more they can do. The breakthrough of being able to train semi-supervised, especially with transformers in natural language, about a year and a half ago really turned that domain from a data-bound problem into a compute-bound problem, which is why we're now racing to build these supercomputers: so that we can train these models in a reasonable amount of time, days, weeks, months. And it is very, very disruptive when we look at the capabilities these things are able to drive, not just in terms of language capabilities but, like David said, multimodal.
And what we're seeing is, I think, a very rapid infusion of AI across our product families. Now obviously these supercomputers are expensive. The infrastructure to train those large models is very different from what people have built in the past. And then once you train them, you have to serve them, and if the models are giant, serving them in real time is also challenging and can be expensive.
So I think it's a transition that I really don't think the industry has grokked yet: how fast this is going to drive the capabilities of AI, and how different the infrastructure needs to be with these 10x requirements.
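Doug's point about 10x-per-year model growth versus Moore's Law can be made concrete with a bit of back-of-the-envelope arithmetic. The sketch below is purely illustrative, using the round numbers from the conversation (10x per year versus a transistor doubling every 18 months):

```python
# Back-of-the-envelope comparison of two growth rates, using the round
# numbers from the conversation: model sizes growing ~10x per year versus
# a Moore's-Law-style doubling of transistor counts every 18 months.
years = 5

model_growth = 10 ** years                # 10x per year, compounded
moore_growth = 2 ** (years * 12 / 18)     # one doubling every 18 months

print(f"After {years} years, models grow {model_growth:,}x "
      f"while transistor counts grow about {moore_growth:.0f}x.")
```

Over 5 years the gap is roughly four orders of magnitude, which is why the infrastructure demand outruns what silicon scaling alone can supply.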
Before we go into the architecture, because it does sound like there are some unique things you're doing to create that AI supercomputer, I want to go back to the comment you made around the size of the models changing. Is this at the bleeding edge, kind of 1% of use cases? When you think about the size of these models and moving from data-bound to compute-bound scenarios, is this more prevalent than maybe we appreciate from the outside looking in?
Yes, I can take it. So there are 2 use cases here that I think we have to separate very clearly, right? One, which is that 1%, is companies that will be creating those massive models, and we don't believe that will be every company on the planet. But the interesting thing here is not actually building the model; it is that these models, as we were mentioning before, are more generic. They are multimodal.
They are multitask. So you can centrally train those models, but then companies without such deep expertise, or without access to these compute capabilities, can customize them for their own needs. To give you an example, that's the way we're actually doing it at Microsoft. We transitioned from a product-by-product AI approach, where Office created their own models, Dynamics created their own models, Bing created their own models, to a culture where we centralized the creation of these massive models and these teams can then customize them. So we now have features in Office, like auto-reply or document summarization, that come from customizing this model without requiring that massive infrastructure and expertise.
So those are the two angles that I would separate.
And David, as you think about going from siloed to centralized larger models, is that accelerating features? Like, what's the big benefit? Is it the time to market with new AI functionality, or is it more advanced AI functions?
So I would say it's several, right? Let's go one by one; I could mention at least 3. The first one, super obvious, is skills, right? These are state-of-the-art models that now everybody has access to.
So you don't need, for example, the same level of researchers and PhDs creating those models, because you can reuse them. That's one super important point that will add a lot to the democratization of AI in the industry. The second one is data, because these models are initially trained with a huge amount of data that is, again, central. In our case, we're training with the entire indexed web in Bing, right?
So a huge amount of data that we're adding in there. Then, for customizing those models, you don't need that big an amount of data. For example, in the case of Dynamics, they custom-train that model with additional data coming from their domain; they do things like customer churn identification. So they only custom-train the model with that delta, with their domain, right?
So very, very interesting. And the third thing I would mention is compute power. Because the customization uses a technique called transfer learning, you don't need to train the model from scratch, so you don't need that huge amount of infrastructure required to train the model initially. Doug, am I forgetting anything else you could highlight?
I think that captured it very well. I would add one point. You asked about model size, and you can think about the largest model that you can train, in the billions of parameters, which we talked a little bit about in our announcement, as the front of the bow wave. I wouldn't call it research, but it's really pushing the envelope in the systems, the algorithmic techniques and the data management, and then you find out what capabilities those models have. And if you take one click down in size, to what was radically large 6 months ago, those models are now being used throughout our infrastructure.
So I don't really think of it as a bimodal distribution, where there's a 1% thing and then everything else. The whole space is just moving very, very fast: there's the new, biggest thing at the edge, and then things that were huge a year ago are now empowering all the products. And think about how fast this is happening: it takes years to build custom hardware and silicon and then build the systems and deploy them. So if you need to make a major change in your hardware platform, that's years before you can do it.
And yet this huge shift is less than 2 years old, and we're still pushing the envelope on the rate of growth. So we're kind of doing whatever we can, with whatever we can get, to drive the models up. Fortunately, we had made a lot of investments in the past that we're benefiting from now. And a second comment I would make, if you think about Microsoft from a strategy perspective, and this goes to something that David mentioned:
We have a very successful public cloud with infrastructure that benefits our customers. We also have this massive PaaS and SaaS business that really empowers our enterprise customers, and frankly consumers, with Office and Teams, and changes there, like David was saying, can reach so many people and transform their business. And those infrastructure changes that we make to get those models out to such a wide group of users who see value, we can then take to our infrastructure for our infrastructure customers to use. Those 2 halves, I mean, I think that's why Satya talks about the intelligent cloud. It's not enterprise software and cloud.
It's one thing, where benefits can slosh back and forth and empower everyone. And I think we're really the top company that has both of those halves in a very strong position.
It is fascinating just seeing the pace of change in the space, and it's hard, from the outside looking in, to really fully appreciate it. All we get is what we see as consumers of the technology: we see the advances, right, small changes, the software getting smarter.
But it's hard for us to quantify how dramatically things are changing on the back end. Let's drill down, and Harsh can weigh in here too, on the architecture of the AI supercomputer. I'm just trying to understand: you have Azure, and you can build your own infrastructure to run your models on Azure, but now you have this AI supercomputer that's different. So maybe walk through, architecturally, what's different about an AI supercomputer versus Azure, and why you think this needs to be optimized versus my building it myself through my own APIs on Azure itself.
So walk us through just the architecture of it.
David, do you mind if I take the initial crack at that one?
All yours. You are the architect.
So first of all, I think it's a mistake to view this as Azure and not Azure.
Okay.
Okay. There's a lot of common infrastructure across both sides. But if you think about what you want for an IaaS customer, let's say a big enterprise IaaS customer: what you need is incredibly high reliability and availability. You want great stability.
You want predictable performance. You want access in many regions. It's really a planetary computer; people build their infrastructure on our computing utility, and it has to stay up. Now, if you go to the AI side and you're training one of these large models, you're running a single program or job over a massive collection of machines. And it's okay for some of those nodes to go down, right, because you can build resilience into that large-scale distributed system.
And we have a lot of experience doing that, for example with platforms like Bing, which has to stay up and respond, but where it doesn't matter if one node drops out. So you're building a distributed system that needs to complete the job as cheaply and as fast as possible, but the reliability characteristics of the components can be very different. You need, for example, a lot more networking to train these giant models, because they're data intensive and the communication is global, so you'll provision much more aggressive cross-data-center networking than you need for an IaaS customer. And obviously there's a different mix of silicon components, because you're doing massive amounts of linear algebra.
But I do think that if you look at the high-speed networking, the ability to communicate quickly, the power requirements you need to drive this, just raw performance, those are all things that accrue to the core Azure business as well. This is just an extreme point, and that business will benefit from the technological advancements that we're making.
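Doug's contrast between always-up IaaS components and failure-tolerant training jobs can be sketched as a toy simulation. All the numbers here (node count, failure probability, work units) are made up for illustration; the point is only that the job still completes when individual nodes drop out:

```python
import random

random.seed(7)  # deterministic for the example

# Toy model: a distributed training job spread over many nodes, where
# individual node failures are tolerated by redistributing the work.
NUM_NODES = 100
WORK_UNITS = 10_000
FAILURE_PROB = 0.05   # illustrative per-node failure probability

surviving = [n for n in range(NUM_NODES) if random.random() > FAILURE_PROB]

# The job still completes: all work units are spread over the survivors.
per_node = WORK_UNITS / len(surviving)
print(f"{len(surviving)}/{NUM_NODES} nodes survived; "
      f"each handles {per_node:.1f} work units")
```

The design trade Doug describes falls out directly: the system's reliability lives in the redistribution logic, not in any single component, so cheaper, less reliable parts are acceptable for training in a way they are not for an enterprise IaaS workload.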
Sorry, I had two questions. Going back about 5 minutes, I think you, or maybe David, mentioned transfer learning. Is that effectively that you would train on this large data set and then give the model to an enterprise data center to run? Or are we talking about something even smaller than that, like running it on a small chip at the edge? Or are we still talking about a data-center-type process here?
David, why don't you take that one?
Both. So these massive models, of course, are meant to run in the cloud. But at the same time, we see many scenarios where, through optimizations, the inferencing of those models can be done at the edge. So they go side by side. So yes, absolutely.
Doug, anything you would add on top of that, technically?
So yes, the way I view transfer learning is: there's building your base model, which is super expensive, and then there's flavoring it with specific data. An analogy might be: everyone on this call is speaking English, but we all have slightly different vocabularies, nuances and choices of words. So if I train an English model, I could flavor it with some history from an individual to customize that model to that individual's speech patterns. And, as David said earlier, it's a lot cheaper to do that flavoring than to build the whole English model from scratch.
That's the training side. And then the question is: do you deploy in the cloud or on the edge? I think that's a separate question.
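Doug's "flavoring" analogy is essentially fine-tuning on top of a frozen base. As a minimal sketch, here is a tiny NumPy illustration in which an expensive "pretrained" feature extractor is frozen and only a small task-specific head is trained on new data. The model, data and sizes are all invented for illustration; the synthetic task is constructed so that the frozen features can express it, mirroring the case where a customer's task is well served by the centrally trained base:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an expensively pretrained base model: a frozen random
# projection followed by a tanh nonlinearity. Never updated below.
W_base = rng.normal(size=(8, 4))

def base_features(x):
    return np.tanh(x @ W_base)

# Small task-specific dataset: the "delta" a customer brings. The label
# is, by construction, expressible in terms of the frozen features.
X = rng.normal(size=(64, 8))
y = (X @ W_base[:, 0] > 0).astype(float)

# Train only the lightweight head (5 parameters) by gradient descent on
# the logistic loss; the base stays fixed, so this step is cheap.
feats = base_features(X)
w, b = np.zeros(4), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))   # sigmoid output
    grad = p - y                                  # logistic-loss gradient
    w -= 0.1 * feats.T @ grad / len(X)
    b -= 0.1 * grad.mean()

acc = (((feats @ w + b) > 0) == (y > 0.5)).mean()
print(f"head-only training accuracy: {acc:.2f}")
```

The expensive part (producing `W_base`) happens once, centrally; the cheap part (fitting `w` and `b`) is what each downstream team repeats with its own small data, which is the cost structure David describes for Office and Dynamics.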
That's absolutely incredible. My second question was on the preferred computational mode: is Microsoft a fan of FPGAs versus GPUs, or do you simply not care? The common knowledge is that ASICs are cheaper: once you get the parameters down a little bit on what you want to accomplish, you always want to reduce the cost of computing with ASICs. But I was just curious what flavor you guys prefer, Doug or David?
As you might imagine, I get asked this a lot, and I'm not going to say "Microsoft believes in X." Let me give you some framing that I think will help people understand the space. If you think about what you're doing in AI for these training supercomputers, you're doing a massive amount of, effectively, linear algebra. So you need an outstanding system architecture for low-latency communication, you need a robust software stack, and you need resilience at the distributed-system level, like we said.
And then, once you get down to the silicon, you've got a number of trade-offs. You've got the efficiency of the silicon: how fast can it do X, just raw FLOPS. You've got future-proofing: if the models shift, how well does the silicon map onto that shift? And then you've got other innovations you might have in the hardware that help the model training, that might help it converge, that might save you cost. So you've got this broad set of trade-offs, and we actually invested in multiple technologies.
We disclosed what the AI supercomputer is built on, because that platform has been really, really great for training these large models. The second thing I would say is that I really don't like the term ASIC, because in this space it's gotten overused; it just means custom chip. In the past it meant something very different: you were building a chip or a circuit for one purpose. What these things really are is programmable engines that do linear algebra flexibly.
GPUs, FPGAs; we've talked about our engagement with Graphcore; there's a ton of startups across cloud and edge; Intel. These are all programmable engines for AI-based linear algebra, with different sets of trade-offs in the efficiency of the math units and the front end that feeds them.
So, just as we had CPUs for many, many years, this is a new class of programmable architecture that's emerging in a very rapidly evolving space. At the end of the day, GPUs, FPGAs, what you call ASICs: they're all chips with a lot of math units on them and different architectures to feed them.
Thank you.
What that ultimately will look like, I don't think we know, or at least we don't talk about it outside the company.
We'll go back to that; we'll try to get a different angle on it, because that's a big question. But my question: as we think about the industry pace of change, 10x per year, we're used to thinking of technology changing around Moore's Law, doubling transistors every 18 months. As you think about 10x every year, is there just a limited number of companies that can keep up with that pace of change? How nervous should investors be about things moving so fast that only a limited number of companies can manage that pace?
Any thoughts, David, as you think about guardrails that we need to have in the industry? Do we have efficient guardrails there?
Yes. I think this is counterintuitive, but technology like this, instead of limiting access to AI to a few companies, does the opposite: it opens it up to more companies. So let me elaborate on that.
We don't expect, as I think we were mentioning before, companies to build these massive models from scratch all the time. What we expect is the concept of AI as a platform, just like what we had in the past, then of course with mobile, and now with the cloud. What is the key value of that for AI? If you look at how companies are developing AI today, their concept of a platform is, I'm sorry Doug, this is going to hurt, but limited, in the sense that it is very close to building something very specific for my solution: I have to build on top of the platform to have something customized for my solution, for my scenario. Where we want to go is a place where AI becomes accessible like a platform.
So you can use these AI models and customize them for your particular scenarios. That's the big change that I think we will see coming very soon, right? And this applies broadly. I know we're talking a lot about NLP, natural language processing, but this will apply to every other scenario. So, coming back to the previous point on autonomous systems, we see a very, very similar pattern there.
Would you say that creating autonomous systems is limited to a few companies? Well, if you look at it from the angle of building all the components from scratch, like deep reinforcement learning, that's a super complex technique that is not accessible for every enterprise. Our approach is very different: it's to provide a platform that abstracts those techniques, like reinforcement learning, so companies can embrace autonomous systems using the expertise they do have, which is their business, right? The same comparison applies to many other facets of AI that we are pursuing across the company.
Got it. So it sounds like, similar to how you're moving from siloed to more centralized models available to all the different internal departments at Microsoft, you're going to start to expose some of those same models externally on an as-a-service basis. So you almost create a new PaaS layer for AI, essentially. Is that the right way to think about it?
Exactly. Exactly.
And what's the timing around that? Is that something that's going to be a 2021, 2022 kind of opportunity?
Yes. It's a good question, and I will go back to Doug's positioning on it. This is not like getting to a finish line; we will see value being delivered throughout this journey, right? And the beginning was last week. So there's already a ton of value that companies can use today because of the approach we have in the company, which I would say has 3 parts.
When you look at our technologies, I would position them in 3 different stages. The first thing we do is be very open about our innovation. For example, we open source these frameworks: just last week we open sourced the key distributed training engine, called DeepSpeed, that enables this training. So that's something very core, right?
But then we go into a phase where we infuse these models into our products, and we already do that too. You have a lot of features that are part of our products, which customers can get access to today, that are built in that manner. And the third stage I would highlight is how we also incorporate this into Azure for developers to build on top of, right? For that we have things like Azure Cognitive Services, which in the end are these AI models served as a service, right?
So they are very easy for developers to use. With those three things, we cover the entire spectrum. And it's not that, hey, today we are releasing everything; it's that, as we make progress in this journey, things get infused into those 3 main channels.
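For readers curious what "open sourcing the distributed training engine" looks like in practice: DeepSpeed is driven by a JSON configuration file. The sketch below builds an illustrative config as a Python dict; the keys shown (train_batch_size, fp16, zero_optimization, optimizer) are part of DeepSpeed's documented schema, but the values here are placeholders, not recommendations:

```python
import json

# Illustrative sketch of a DeepSpeed-style JSON configuration. The keys
# are from DeepSpeed's documented config schema; the values are placeholders.
ds_config = {
    "train_batch_size": 512,
    "gradient_accumulation_steps": 8,
    "fp16": {"enabled": True},          # mixed-precision training
    "zero_optimization": {"stage": 2},  # partition optimizer state + grads
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

# A training script would write this out and hand the file to DeepSpeed,
# e.g. via deepspeed.initialize(..., config="ds_config.json").
with open("ds_config.json", "w") as f:
    json.dump(ds_config, f, indent=2)

print(sorted(ds_config))
```

The ZeRO optimizer-state partitioning configured here is one of the techniques that lets very large models be trained across many accelerators without each device holding the full optimizer state.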
Fascinating. My last question before we shift gears to...
Hey, Brent, I'm sorry, can I add just one point onto that? It's important for your investors. Imagine that today I start a new company and I want to become a public cloud vendor.
I want to build another cloud business on my own infrastructure and compete with the big mega clouds. The capital I need to raise and deploy to build these giant data centers and have them available around the world in many, many regions, the number of software features I need to provision, and the number of SaaS and PaaS services I need to onboard: it's just not achievable for a new entrant into the space, especially when you look at some of the advantages the big players have, and I'd point to our enterprise software business as well. In the AI space, and this I think goes directly to your question, you need to be able to build the biggest supercomputers the world has built to train these models, and you need really deep distributed-systems expertise.
You need a really, really strong AI team that understands this. So the capital and the level of expertise and depth required is, I think, not quite at the point where you would struggle as much as entering the public cloud, but it's very large and growing. Just by definition, that's going to limit the number of players who are able to do this. Microsoft's position is that we want to empower all our customers, like David said, and give them access to this technology. But the number of organizations that can build that infrastructure from the ground up, I think, is going to be small.
Totally makes sense and kind of leads into my next question around OpenAI. Maybe could you just walk through that OpenAI relationship? You put obviously a bunch of money there, a lot of really great talented engineers that are kind of aggregating at OpenAI. So maybe just walk through that Microsoft OpenAI relationship, what is it and how does it relate to the AI supercomputer announcement?
I'll say 2 things, and then I think David should weigh in. I want to be judicious about commenting on a close partner. Number 1: OpenAI is among the most ambitious organizations in terms of what they're trying to do with AI and their mission. So they, like our internal teams, push the infrastructure very hard.
Given the scope of their ambition, the fact that they chose to enter into this Microsoft partnership is, I think, a very, very strong vote of confidence in our infrastructure and our roadmap. You could look at that and say: wow, one of the most ambitious AI organizations on the planet was willing to sign this deal and commit to our infrastructure. That should tell you something about our roadmap, which we don't publicize.
Very helpful. Anything else, David, you want to weigh in there?
Yes. It comes back to Doug's points before, right, in the sense that in order to serve these massive models, you need, just to throw a number in there: the AI supercomputer announced last week immediately became the number 5 supercomputer in the world, right? So that's a sense of the scale that you need there. To build a system like that, the other comparison that I make all the time is that you should look at this as Formula 1, right?
All the innovation that you do in Formula 1, you then make available in the typical cars that we use every day. The same thing is happening with this investment. It's not only about creating this AI supercomputer to build the massive model that we believe will be the foundation for that AI platform we were mentioning, which will only be available from the few vendors who really have the capital for it. The other part is how we're taking all of that innovation and infusing it into everyday Azure, right? So that's the other big aspect to consider in this.
That's right. And just to underscore that point, Brent, I'm sorry, I'll shut up in a second: think about what David just said, the 5th largest supercomputer in the world, with the models growing 10x per year. Just think about the implications of that for a minute and where this is likely to lead.
The 5th largest supercomputer in the world, and that's trying to keep up with a modeling appetite that's growing 10x a year. Very, very fascinating. Harsh, you had a question?
No, I'll wait actually. I had a question on quantum computing, but I'll wait for you to finish.
Let's shift gears to more of an enterprise reality. The enterprise reality is that we're going through a pandemic; we have work from home. It feels like this is an environment where you could see AI projects pause and slow. So what are you seeing in this pandemic environment relative to the appetite for AI? Is it slowing?
Is it accelerating? Just walk us through what you're seeing there.
You want me to take this one, Doug?
Go ahead. Yes.
So please build on top of this, Doug. I have to say that even before the pandemic, we were seeing a change in the conversation with enterprises. I remember that maybe 4 or 5 years ago, every conversation you had with a customer was about: hey, what is this AI thing? Should I be on top of it, right?
Then, finally, companies realized: okay, this is big; I need to embrace AI if I want to transform my business. But the next question was: how do I get started? We saw a lot of adoption in the enterprise, but in the first phases of that adoption, right? So many pilots, many proofs of concept, etcetera. From then to now, we've seen that shift in the market to: how do I really have an impact on the business? How do I move beyond the proof of concept?
How do I connect AI with business outcomes? So that conversation was already happening. And this pandemic, this global crisis, has accelerated that conversation even more. So we see customers now cutting to the chase, telling us, hey, what I need now is putting AI into action, and I need it now. And we usually have that conversation on 2 fronts, right?
One is the current crisis, the response phase: how can I use AI to help me deal with the disruption that I'm experiencing today? But then, even in the longer term, we know that we're never going back to the previous normal. It's going to be a new situation with a lot of economic uncertainty, with a lot of care on how do I optimize revenue, how do I optimize my operations, etcetera, in my company. So how can AI help me there? On the first one, let me start there, because I think your question was more on the first, right?
It's more about today: what are companies doing with AI, how is AI helping with the disruption they're experiencing. And we see 3 primary scenarios. These are the 3 use cases where we actually saw a rise in demand from our customers, and we see that across industries, not for a particular vertical. The first one, which is very obvious, is of course customer service.
So that is the most visible one, right? Companies that were in many cases piloting AI to streamline their customer service are accelerating those projects. We see examples of companies that had been working on a project for 18 months, and then, with the current crisis, they put it in production in 2 days, right? So that's the acceleration we see in some cases. And in many cases, it's definitely customer support.
So you had a perfect storm, with more requests from your customers, but at the same time your operations being impacted by the crisis. But in other cases, we're seeing broader scenarios, like really being able to identify your best customers, being proactive, personalizing your service to those customers. We see that across a number of customers. And I'm happy later, if we have time, to go into some examples from particular companies. But that's one scenario.
The other one that we see happening hugely in this crisis is, of course, business process optimization. And I'd like to separate that in 2, because the first thing customers are noticing is that processes that were very established in the past have now been disrupted dramatically. Think of supply chain, right? Think of forecasting demand, fraud detection, customer churn. Those processes are dramatically different right now.
So they need to put in place processes that are more agile, more flexible to those changes. And AI is a solution that is helping in that area. So that's one thing. The other thing, of course, specifically because of the economic situation, is that we see customers looking for cost savings, right? And in business processes, that means a lot.
And we see many customers that are using AI to shorten their business processes, to streamline them and make them more productive. So that's the second scenario. And the third one that we see a lot, because of another perfect storm right now, is employee productivity. So that includes the obvious, but it goes beyond that. We see another perfect storm: a very, very complex situation that employees have to deal with now from the business point of view, but with their productivity impacted by the current situation.
So we see companies looking at how AI can, in the short term, increase that employee productivity, and we see many, many examples of that. So those are the 3. Doug, I don't know if you want to add something on top of that, but we can go deeper into examples later if you want.
Yes, I thought that was super interesting. It was even interesting for me, and I'm in the company. So thank you, David. I would just underscore what David said. What we're seeing is that this crisis is accelerating companies' desire to do digital transformation, partially because they need to and partially because if you're going to optimize a process that you deferred for a long time, you might as well just do it, and do it right.
And Satya has invested enormously in having Microsoft try to be the lead company that will help people solve problems through digital transformation. And so I think it's an opportunity, with our infrastructure and our services, to really help our customers.
And as you think about the types of AI use cases that seem to resonate most right now, you gave 3 examples. Is it NLP? Is it text, speech, computer vision, all of the above? Is there one type of AI that's really, like, oh my god, it's taken off much faster than I would have thought?
David, go ahead.
I feel all of them, but if I have to pick a favorite right now, it's NLP. NLP is in a very hot moment right now, because this acceleration is especially relevant to an area that was very tricky. NLP is very complex; it requires generalization; it is very, very complex to solve. And with these new state-of-the-art models, we see huge potential in that.
But in the examples that I gave before, NLP is there, and we see all the techniques of AI across the board.
David, I can vouch that it's been about 9 months since I have spoken to a live person without speaking to a machine first.
I think we all experienced that. Absolutely. We have about 50 minutes left here. What I think would be helpful is to get an update on Project Brainwave. Obviously, Doug, the last time we met was about 3 years ago, when we did a deep dive. We'd love to understand more, and Harsh is probably better positioned to go deep on this, so I'll turn it over to Harsh to get an update on Project Brainwave.
Sure. David or Doug, why don't you give us a 30-second or 1-minute overview of what you're trying to accomplish with Project Brainwave, and then I can jump into some questions.
David, do you want me to take this?
Okay. So the project is continuing to go super well. We're continuing to use it at scale worldwide to serve the models that we train. Like I said, we definitely have a mix of technologies in our infrastructure. And with that program in particular, we're able to incorporate innovations fairly rapidly to really push what the models on the serving or inference side are capable of doing.
I think we have really focused on our internal businesses, because that's where the need is pressing. If you think about the need to serve these giant models, and the latency and the cost, the expense, that's where a lot of innovation is required. I don't want to talk too much more about it, but we are going to be saying more publicly this year.
So I hope that will be a chance to get updated then.
And one term that we hear a lot, and we obviously see this in Hollywood movies and stuff like that as well, is quantum computing. I was curious, since you guys are some of the top leading experts in the country on AI: is quantum computing even applied to AI in some manner, or is it used for completely different things? And if so, how do you guys feel about the potential for that maybe commercializing in the next 5, 10, however many years, if it's even that far out?
David, do you mind if I start with this one? Yes, sure. Thank you. So there is certainly an overlap between the two; the research community is looking at doing machine learning with quantum algorithms and quantum-capable algorithms. What I would say right now, my personal view, and I'm not speaking for the company here, is that initially the two technologies, AI supercomputers and quantum systems, will be targeting different applications. The things you can solve with a lot of the quantum algorithms are just very different problems than what we're doing with these large-scale AI models. Eventually there may be some convergence. But where quantum, I think, will do best initially is in problems that have huge compute requirements but not a lot of state, simply because of the difficulty of keeping large amounts of state in superposition. And if you look at the amount of state we're using to train these models, it's massive. So I think of them as disruptive technologies that initially are attacking different classes of problems. And then, theoretically, there is potential for overlap in the longer term.
And Doug, when you say state, what exactly do you mean?
Information, bits, bytes, right? How many bits of state do you hold on the computer? Is it 1,000? Is it 10,000? It's not the petabytes of data we're operating on in the supercomputer space.
It sounds like far away is the best way to think about quantum computing, at least a couple of years away, if not more.
I would say I don't want to take a position on the quantum roadmap. I would say that for large scale AI, I think we're pretty far away in the quantum space.
Brent, do you have anything?
Yes. So let's shift gears a little bit and talk about some real-world use cases. We've covered a lot of ground here. You gave us some scenarios of how people are using it. But you just recently announced a number of new agreements, all sorts of new use cases.
I think coming out of Build a week or two ago, there were announcements around some new large enterprises adopting Microsoft AI, Azure AI. I think there are something like 20,000 enterprises that have deployed Azure AI at some point in the last year. But walk through how people are actually using it. Are there interesting scenarios you've come across that you'd like to highlight, taking the technology into real-world scenarios?
This is a question for David. Yes, I can take that one.
I could use the frame I was using before, but I would add one. So the frame I was using before was those 3 key scenarios. I can work through some canonical examples there, and even bring some of the real customers that are in those three scenarios. But then I would add an additional one, which is the post-COVID-19 moment, right? That's the moment when we will see companies reimagining themselves in this completely new normal, and I can bring some examples in that area as well. So let's start with today.
Let's start with customers that are using AI today in those three scenarios. We started with customer service. Customer service is a very easy one, because it's rare to find a customer that is not now using AI for their customer service, and in general their customer engagement, right? So we have many examples. I think the most exciting one, which went out last week, was actually in the healthcare area.
So in Microsoft, as you know, not only do we provide the platform for customers to create, for example, bots to be used in customer service, but we also provide SaaS solutions that they can use directly for those scenarios. One in particular is called the Azure Healthcare Bot Service, a vertical bot that health organizations can use today for their customer service, in this case their patient service. What we have seen in the last month is an exponential usage of that service. Just to give you a sense, we have seen 1,500 health organizations, between public health organizations and health providers, etcetera, implementing new projects. So think of that: 1,500 new projects put in production specifically for COVID-19 management, those bots that are being deployed.
In just these few weeks, they have a reach of 30,000,000 people. So think of the scale of that, right? And they provided a huge tool to reduce some of the load that we were seeing on health professionals like doctors and nurses, by handling a first-level interaction with patients: a self-assessment tool that patients can take in order to be redirected to the right resource, right? So that would be one example. In business process optimization, I don't know where to start, because, again, every company is doing a lot in that area.
So one, and I think you mentioned this one, one of the ones we announced last week was FedEx. FedEx is the perfect example of business process optimization. They have this massive amount of data, very granular data on their shipments. With this agreement, they are going to add AI on top of that so they can have better intelligence on what is going on, so they can not only identify trends or problems that are happening, but also optimize that supply chain across the organization. So really, really a perfect example of business process optimization. But it goes everywhere.
I can think of IndiaLends, another customer. IndiaLends is a credit underwriting platform in India, serving something like 50 banks. What they did in their case is use AI in their credit approval system, and they were able to decrease the internal processing time for credit approval by 50%. In these times, imagine the impact of that: having the ability to process twice the number of credit applications, in a moment like this, which is absolutely critical. In manufacturing, we have also seen many examples. I think I mentioned this one before: we see companies starting to use autonomous systems, not for the moonshot of autonomous driving, but for things like motion control or process optimization in manufacturing that are getting big results from day 0, right?
And we see many customers on that front. What else can I bring in there? Well, business process optimization we actually saw in healthcare as well, where we have such limited resourcing right now that it is critical that things like medical supplies and even hospital beds are optimized with things like AI. So very, very typical.
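As an aside, the arithmetic behind the credit-approval example above is worth spelling out: throughput scales as the inverse of per-item processing time, so a 50% time reduction means twice the applications processed in the same period. A minimal sketch; the function name and numbers are illustrative, not figures from the customer:

```python
def throughput_multiplier(time_reduction: float) -> float:
    """Factor by which capacity grows when per-item processing time
    is cut by `time_reduction` (a fraction, e.g. 0.5 for a 50% cut)."""
    return 1.0 / (1.0 - time_reduction)

# Cutting internal processing time for credit approval by 50%
# doubles the number of applications handled in the same period.
print(throughput_multiplier(0.5))   # -> 2.0
```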
The other one that I mentioned was employee productivity. Let me bring just a couple of examples there. One generic pattern we see a lot is AI working with humans to augment their productivity. Just to give you an example of that: Reuters. In the case of Reuters, you have their journalists doing what they do, which is writing articles.
And in this case, they were using AI supporting them So they can attach relevant videos to those articles. And not only they didn't have to go through that very manual process of finding relevant videos, But they also by using AI, they also increased the average completion rate 80% with this technique, right? So Not only better employee productivity, but then better results. The other one that I would mention that is very connected to that is not only don't think of Employee productivity only removing like tedious tasks or repetitive tasks. It's also about making better decisions.
The other example that I will bring here is Team Rubicon. They are managing more than 100,000 volunteers in the U.S. for COVID-19. So think about the massive scale of the solution required for that.
And in their case, they are using AI to really identify and optimize the deployment of those volunteers across the U.S. So very important work that is really about making better decisions. Those were just a few examples on the first part, how companies are using AI in the response phase. But let me use a couple of examples on the reimagine phase, right?
Because I think that is the big conversation we should have: what happens after this health crisis, as it turns into an economic crisis, right? And we see 3 key things happening there, so let me use some examples. The first one, which we talked a lot about today, is AI at scale. I know we talk a lot about scale in the context of the model, but there's a bigger motion that will happen in the enterprise, which is really applying AI at scale in the business.
So it's moving from that pilot phase to really infusing AI into everything they do. And for that, the key thing, and I will bring an example right now, is how you need to scale the usage of AI across your business units and move it beyond your technical units. We see the business being more connected to the AI transformation than it was before. That is something that will certainly happen as we move into this phase. And the example I would bring here, because I think the key lesson is that we have gone through this before, is software development.
Remember, we were having this conversation 10 years ago for software, and we learned how to do it. It was called DevOps, and it was all about bringing developers, the technical units, and the business together in a combined life cycle. We don't have that for AI yet. You have technical units working in silos, and their work is not being infused into the business. So the equivalent of DevOps for AI is called MLOps, and we see that as a huge trend moving forward.
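To make the DevOps-to-MLOps analogy concrete, here is a minimal sketch of the kind of shared life cycle it implies: one model registry that both technical and business units work against, with a promotion gate tied to an agreed business metric. The class names and the `auc` metric are hypothetical illustrations, not an Azure ML or Microsoft API:

```python
from dataclasses import dataclass, field

@dataclass
class ModelVersion:
    name: str
    version: int
    metrics: dict           # e.g. {"auc": 0.91}, agreed with the business unit
    stage: str = "staging"  # staging -> production, like a DevOps release gate

@dataclass
class ModelRegistry:
    """Hypothetical registry: one shared catalog instead of per-team silos."""
    _models: dict = field(default_factory=dict)

    def register(self, name: str, metrics: dict) -> ModelVersion:
        # Every retrained model gets a new tracked version, never overwritten.
        versions = self._models.setdefault(name, [])
        mv = ModelVersion(name, len(versions) + 1, metrics)
        versions.append(mv)
        return mv

    def promote(self, name: str, version: int, min_auc: float) -> bool:
        # Promotion gate: only models meeting the agreed metric reach production.
        mv = self._models[name][version - 1]
        if mv.metrics.get("auc", 0.0) >= min_auc:
            mv.stage = "production"
            return True
        return False

registry = ModelRegistry()
v1 = registry.register("churn", {"auc": 0.87})
registry.promote("churn", v1.version, min_auc=0.90)  # fails the gate, stays in staging
v2 = registry.register("churn", {"auc": 0.93})
registry.promote("churn", v2.version, min_auc=0.90)  # passes, promoted
```

The point of the sketch is the shared process, not the code: versioning and gating are what let thousands of models be managed outside a single silo.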
And an example that I will bring here, which we just published, is the Department of Transit in Vancouver. Think about this case: we talk a lot about the size of models, but what about the number of models you need to transform an industry, to transform your company? In this case, they have 18,000 models. Think of that number of models, and think of managing those models. There's no way you can do that with just a siloed technical unit, right?
So they brought together the business and the technical units with a common MLOps process that runs across all of that. So that would be one. I know I'm talking a lot here, but let me just go to the other 2, because I think it's important. The second one is
We've got a 1-minute drill, and then we have to
I will do it in half a minute. Because the second one that I want to mention is how the next step in this transformation is to empower the business. We have talked a lot about technical units developing this AI, but the next step, and we have made very big steps at Microsoft, is to empower the business so they can also apply AI, right, with things like the Power Platform and Dynamics 365. We see that as a reality moving forward. And the one example I will bring here is Novartis. Novartis, the pharmaceutical company, as you know.
Just about a year ago, we signed an agreement with them where they are basically empowering their 50,000 employees, almost half of their employees, with AI. But don't think of it as just using AI; think of it as applying AI, with flexibility, with freedom, to their processes. So you can have researchers who are researching new drugs or new treatments or new vaccines being augmented with AI in their processes, all the way to drug manufacturers and sales and marketing experts. That's a huge motion that we will see moving forward. And the third one, and this will be my last one, is let's not forget about responsible AI. We have had this conversation for many years already, and we're seeing a huge shift in the conversation from, what are the challenges of AI?
What are Microsoft's principles? To now: I'm under huge risk as I accelerate my adoption of AI, help me implement AI responsibly so I can mitigate those risks. We have many examples here, but just to name one, TD Bank, the bank in the U.S. from Canada: you can see how they're using responsible AI to mitigate things like bias, or to add more transparency to the models they are deploying.
Thank you, David. You guys have so much to talk about on AI; Jonathan has extended this conversation for another hour. I'm just kidding. Last question for me, Doug. As you think about the pace of change here, 10x in the last year on the modeling side, what are you most excited about, thinking about the AI industry and Microsoft's opportunity in the next 3 years?
One thing, what's the one thing you're most excited about?
Are you still thinking about that?
David, go ahead. I need to give that some thought.
I keep going back to the concept of really putting AI into action, the concept of democratizing AI. I know it's not so technical, but there's a lot going on to empower that motion, because it goes beyond just the infrastructure and the software; it's really about a comprehensive solution, from research to the 3 clouds to our tools, right? So really, really powerful if we get there in the next year.
So driving those business outcomes for sure for you. Yes.
I'm going to give a little bit of a squishy answer, but it's a true answer. You very rarely get the opportunity, and this is a personal view, to work on things that will really change the world in a positive way. And I think the capabilities we're going to be able to generate, just from my perspective with the hardware and the systems work, can help really solve global problems. I mean, that is a once-in-a-lifetime opportunity: to build something that allows us to make meaningful shifts on personalized medicine, climate change, efficiency, security, coming back to the concept of responsibility. So for me, I just really feel driven by a mission and a purpose. To really move the needle and make the world a better place is a blessing.
Boy, Doug, that was so nice of you. You should have said that at the beginning, so I could have changed my answer.
Well, listen, we're out of time. Really appreciate David and Doug sharing your views on the current state of AI here. As always, it's a pleasure, and thank you so much for sharing with us. Thank you.
Thanks for the opportunity to chat with you all. And David, great job. I should tag-team with you more often.
It was a pleasure as always. Bye.
Take care. Bye-bye.