
Goldman Sachs Communacopia + Technology Conference

Sep 10, 2024

Moderator

For the first time ever at an investor conference, we have the Chief Technology Officer of, I think, still the world's most valuable company. I'll let Kevin jump into his intro in a second, but I have to share with you a little-known piece of trivia. If we're talking about generative AI in such a big way, and if this thing has become so mainstream, it was a magical meeting between Sam Altman and Satya Nadella that was perhaps facilitated by none other than my guest here. We owe our interest in generative AI, and how it's become such a mainstream thing, to Kevin. Kevin would-

Kevin Scott
CTO, Microsoft

Certainly not entirely.

Moderator

In some part. Tell us a little bit about your background and how you got to be the CTO of Microsoft, and we have some questions to jump into.

Kevin Scott
CTO, Microsoft

Yeah, it's sort of a circuitous journey, but like, the short story is I grew up in the '70s and '80s in rural central Virginia. Like, you know, I was super lucky to come of age when the personal computing revolution was happening. Like, I studied computer science. I thought I was gonna be a college professor for a really long time, and then I left academia. I dropped out of my PhD program and joined this startup called Google about a year before their IPO, and yeah, I did a whole bunch of different things at Google.

It was the place where I first built AI systems, so I did a bunch of stuff on both the search and the advertising side, like building large-scale machine learning systems. And, like, for a while, ran the ads quality system, so, like, this big machine learning system that did CTR prediction for advertising, which was the thing that made the ad auction actually work.

Moderator

I believe it's a small business these days.

Kevin Scott
CTO, Microsoft

Yeah, yeah, small business.

And then I helped build this advertising company called AdMob. I left Google to do a startup, which got acquired by Google three years later. I was at Google for a while, and then I went to LinkedIn. Like, I ran engineering and operations at LinkedIn, helped take the company public, and then I joined Microsoft, when Satya acquired LinkedIn.

Moderator

That's great, and you ended up as CTO of Microsoft. That's a very unlikely-

Kevin Scott
CTO, Microsoft

Quite a story. Quite a story.

Moderator

Unlikely.

Kevin Scott
CTO, Microsoft

Yeah, yeah.

Moderator

For someone who considered a PhD in English literature at some point in your life-

Kevin Scott
CTO, Microsoft

Right.

Moderator

This is quite, quite a fascinating thing. Kevin, can you share with us your view of where we are with generative AI today? How do you see it evolving over time, and is there at all a way to think about how this builds upon old AI that you developed at Google? If that's even a way to think about building one on top of the other.

Kevin Scott
CTO, Microsoft

Yeah, so I think we're still in relatively early innings. You know, the interesting thing that's happened over the past, you know, decade in particular is, AI systems have started behaving more like proper platforms. So, like, the first AI systems that I built were relatively narrow. So, like, if you wanted to solve a problem, like how do you calculate the effective CPM for an ad so you can rank them in an auction? You know, you have a very narrow set of data about the ads and how people are clicking on them, and you use a relatively simple algorithm that's running at really big scale, and, like, you build a model, and you run a bunch of experiments, and it's sort of a closed feedback loop.

And the model gets better and better and better over time, but only at the narrow thing that you trained it to do. And, you know, if you wanted to do a lot of machine learning, like, in the past, like, you had to have a whole bunch of different teams, like, running a bunch of those, like, little vertical loops. And I think the thing that has really started to accelerate over the past handful of years is, like, you have with generative AI these frontier models that really are quite useful for a huge variety of different tasks and applications and product contexts. Which means that you can think about them as a platform, like a database or like any other piece of infrastructure that you use to build things.

It doesn't mean that you have zero work to do. Like, there's still a non-trivial amount of work that you have to do to, you know, go make something valuable out of this bag of capabilities that is modern generative AI. And I still think, to a certain extent, the hardest part about all of this is, like, just having classic product sensibility. Like, understanding, like, what a customer need is, and, like, what problem you're really solving, and then just attending to that with hair on fire urgency to, you know, try to do something valuable for somebody else.

But the problem that you don't have right now is, you know, the one that I had when I wrote my first machine learning program, which is I had to sit down and spend six months reading a bunch of papers and writing a bunch of really complicated code, just so I could do a relatively basic thing. And that thing that was my first ML program, in 2004, so a depressingly long time ago, a high school kid could now do in, like, two hours on a Saturday.

Moderator

Oh, boy.

Kevin Scott
CTO, Microsoft

I mean, it's just sort of shocking, like, how much capability is now in the hands of people who wanna create new things with AI. And so, like, I think that's where we're at. And, like, the thing not to lose sight of, although, like, I really would encourage folks not to get too swept up in hype. But there is a very real thing happening with these AI systems, where they're becoming, like, very much more powerful over time. And, like, I've said this, you know, a bunch of times, you know, in public, and I say it all the time internally, like, we are demonstrably not at the point of diminishing marginal returns on how capable these AI systems can get.

I think Dario was on stage, you know, right before us. Like, I think all of the people who are sitting on the frontier, like, evolving this particular new infrastructure can really see that, like, there's still a lot of power yet to be built, and a lot of capability to be put into the hands of developers, where, you know, five years from now, ten years from now, like, somebody will be sitting on stage, you know, telling some story about, you know, the impossible thing that they couldn't do in 2024 that now a high school kid can do on a Saturday afternoon. Like, that I'm sure of.

Moderator

So that's where you see things that we take for very complicated abilities becoming table stakes?

Kevin Scott
CTO, Microsoft

Yeah, and it is, you know, the. I mean, I know you all are investors; like, I usually say this to, like, product makers. The thing that's going to be really interesting over the next handful of years is, like, seeing the companies and the entrepreneurs and the creative people sort of prospecting at that boundary of, like, impossible to hard.

I think in every platform shift that you get, whether it's the PC revolution, the internet revolution, the mobile revolution, the first things that happen is, like, you get this amazing new bag of capabilities, and people go off and, like, build trivial things, and then they very quickly realize that the things that have enduring value are, like, the things that are, like, just on the verge of impossible. Like, they're ridiculously hard to do, but at least they're not impossible anymore, and, like, that's where the really interesting stuff lives, and, like, you can see it model generation by model generation, like, things phase shifting from impossible to just hard, and, like, that's a good spot to focus your attention.

Moderator

Got it. You've lived through a few tech cycles. So how do you compare and contrast this AI cycle that we're going through to internet, mobile, cloud?

Kevin Scott
CTO, Microsoft

Yeah, I think there are a lot of similarities to all of these big revolutions. Like, it's sort of catalyzed by infrastructure. Like, you've got a new thing that makes a whole bunch of things, you know, possible that were, you know, impossible or extremely difficult or costly before. Because they are all platform revolutions, like, they're not narrow, so it's not a thing that one company is doing, where it's like, okay, I've got some secret stash of infrastructure that only I have, and, like, only I can, like, imagine, like, what the possibilities of the infrastructure are.

So, like, we just, in parallel, like, we have a huge community of people being inspired in a bunch of different ways by what the infrastructure is gonna allow them to do, which I think is really, you know, interesting and exciting and invigorating. Like, the thing that, you know, makes me happiest about, like, being in the spot that I'm in, is, like, seeing what creative people are gonna choose to do with this. It also, you know, interestingly, I think all of these things have changed the way that software development happens. It not only opens up new possibilities for what software can be developed, it also changes the means of production of software itself.

So if you think about all of those previous revolutions, like, you get a brand-new tool set, and all of a sudden, a type of software gets easier to write. And, like, you're just sort of excited as a software developer that, like, "Oh, my God, like, now I've got this thing, and, like, it just... Like, all of this stuff that irritated me before is, like, easier now."

And so, like, those two things, you know, constructively interfere with one another. So, like, you're off chasing new ideas, but, like, you're chasing it with a tool set that's made you more productive than you were before. And so, like, that, that's truly an exciting moment to be in. And, like, you know, we'll sort of see over the, you know, the coming years. These are things that are very hard to predict, but all of this may be happening faster than what we saw in the previous revolutions. But, like, you know, one thing that's relatively certain, like, if you sort of believe that we're in a big platform shift, you know, trillion-dollar companies that are brand new are getting created right now.

Like, usually, like, the folks who move early, like, latch on to the possibility, like, get into that learning loop where they are, you know, experimenting and learning and, you know, understanding, you know, like, what the valuable is versus what the trivial is, like, are the ones who, like, have real advantages in, you know, building those durable, super valuable things.

Moderator

Got it. If this question sounds very intelligent, it is because Marco Argenti, our CIO, asked me to ask this question of you. I wish he'd been here sitting on stage with you, but he has another commitment. It goes like this: We've seen exponential improvements in LLM models so far.

Kevin Scott
CTO, Microsoft

Yep.

Moderator

There's a race for attributes, parameters, modes, and data size. Is the rate of change slowing down? Is this generation of models the path to AGI, or do we need a fundamentally different evolution of the transformer architecture to continue making progress towards that goal? So clearly, that question did not come from me. Marco, thank you, in case you read the transcript of this.

Kevin Scott
CTO, Microsoft

Yeah, look, I mean, again, you all will have to be the judge of this over the coming weeks or months, but, like, you know, I think there are some super exciting things that are coming in the, you know, last half of this year, that, you know, lots of folks have been working super hard on. I don't see, you know, just, I've said over and over again, like, I don't think we're at the point of diminishing marginal returns on the current paradigm. Like, I think we have a pretty clear path to, like, building infrastructure that, like, stays on the same, you know, path of performance gains that we've seen. And, like, on multiple dimensions.

So, like, it's capability, it's, you know, it's cost, it's like, you know, sort of power performance of systems. It's like, you know, just sort of a bunch of things, and, like, an entire ecosystem of really smart people, like, tackling all of the different parts at all the layers of the stack, just trying to improve things. I mean, that said, like, you're, you are not wrong to suggest that there probably are some disruptive innovations that will change things again. Like, we should hope for it. Like, I hope the transformer is not the last interesting discovery in-

Moderator

Oh, really? Okay.

Kevin Scott
CTO, Microsoft

In ML architectures. Like, I hope not. And, like, you know, we even have a proof point. Like, we all have a, you know, a twenty-watt AGI machine sitting between our ears, which is dramatically more efficient than, like, the things that we're building right now. So, like, you should hope that, you know, we make some discoveries over the intervening years that bridge the gap between what we're doing and what biology knows how to do. But, like, the things are not independent. Like, we're not, at least from what I can see, like, we're not at the point where we're about to stall on progress of the, you know, increasing the capability of the infrastructure because, like, we don't have the next idea for, you know, what needs to be done to make it more powerful.

Moderator

Got it. There's been a lot of talk about small language models versus large language models and the role and the relative positioning of these two. How do you shake out on this SLM versus LLM? And a follow-up question to that, I wanted to ask about open source. Where does open source fit into all this as well?

Kevin Scott
CTO, Microsoft

Yeah, like, I mean, like, we can sort of, like, start with the fundamental principle that I think answers them both. Like, I'm pro-developer. Like, I think, you know, do whatever you need to do to, like, get the interesting application written, you know, so that you can do something someone cares about. Like, being dogmatic about what tool you use to solve a problem is kinda nuts, and like, you can practice a bunch of wishful thinking about, like, what you would like developers to do, but I'm sure you all know developers.

Like, they're the orneriest people on the planet, like, highly opinionated, and, like, they're gonna experiment with a bunch of different things, and, like, they will choose the tool that makes the most sense for them to solve the problem that they're trying to solve. So in that sense, like, you know, you even look at Microsoft developers building Copilots. Like, the way that a Copilot is architected is that you are typically using a frontier model to do the bulk of the interesting work there. But, like, you also use a bunch of smaller models in parallel.

Like, you have a, you know, fairly robust orchestration layer that decides how to route requests to which model to, you know, let you achieve the performance that you need on a bunch of different dimensions for the application you're trying to write. Like, sometimes, like, you need, like, to send something to a small model because you're not gonna get a better answer from the large model, and it's just much cheaper or much faster to, like, make the inference request to the small thing. Like, sometimes, like, you're trying to do something on device locally, like, you can't afford a network call into the cloud to invoke a large model.

And so, like, I think having that flexibility to architect the actual AI applications using, you know, using the right mix of models, is, like, an important thing for developers to be able to have. But, like, the large models are very important, and, like, they are the things that are... I mean, they sit on the frontier. And so, you know, again, if you are looking at building the most ambitious thing possible, like, you, I think, need to have one of them in your portfolio of tools, so that you can be attempting the things that only they enable.
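
A minimal sketch of that routing decision, assuming made-up model endpoints and thresholds; this is not Microsoft's actual Copilot orchestration layer, just the shape of the idea:

```python
# Hypothetical model endpoints: the names, outputs, and routing rules
# below are invented for illustration.
def small_model(prompt: str) -> str:
    return f"[slm] {prompt}"

def frontier_model(prompt: str) -> str:
    return f"[llm] detailed answer to: {prompt}"

def route(prompt: str, on_device: bool = False, needs_reasoning: bool = False) -> str:
    """Toy orchestration layer: send each request to the cheapest model
    that can still meet the application's requirements."""
    if on_device:
        # No network call into the cloud is possible; use the local SLM.
        return small_model(prompt)
    if needs_reasoning or len(prompt.split()) > 50:
        # Hard cases go to the frontier model.
        return frontier_model(prompt)
    # Otherwise the small model is cheaper and faster for the same answer.
    return small_model(prompt)

print(route("classify this short review", on_device=True))
print(route("write a multi-step migration plan", needs_reasoning=True))
```

In a real system the routing predicate would itself be learned or much richer, but the portfolio idea is the same: frontier model for the frontier work, small models everywhere they suffice.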

But it's not a dichotomy, like, not an either/or. Like, same thing with open source. Like, I think open source is, like, just making tremendous progress. Like, it's super encouraging, like, as a developer, I think, to see how fast the open source community is building really interesting things. And, like, you know, there are a bunch of people at Microsoft and Microsoft Research inside of my team who are, you know, building things like Phi, which is, like, a really capable SLM. And you know, it's open source, you know, for folks to use as they choose. And you know, again, with developers, like, you know, they just want choice. Like, they wanna be able to experiment. Like, they want to. They don't wanna be told, you know, what their toolset is. They wanna be able to experiment and choose.

Moderator

Got it. So at Goldman, we have this acronym, IPA: infrastructure build-out, then platforms and applications. That's how we've seen other computing cycles more or less evolve. Do you see generative AI progressing in a similar way, or am I hallucinating?

Kevin Scott
CTO, Microsoft

Yeah, look, I think so, what I'm seeing right now, and this is the place where it is like a really unusual revolution, so like, you definitely have like these things that are sort of independent of one another, execution-wise, so like, there's a bunch of infrastructure stuff that you need to go, you know, like pour concrete, like sign leases, like get power in place, you know, solve a bunch of electrical engineering problems, solve a bunch of cooling problems, like get the right silicon in the, you know, right places, design the network fabrics. And, you know, all of these things operate on different timelines.

And then you have the software stack that sits on top of that, like your low-level systems layer, and then you have, like your middleware and applications, stacks that sit on top of that. Things are moving so fast right now that it is kind of hard to imagine a world where you, like, go do the infrastructure build-out, and you wait until it's done, until you make, you know, the substantial decisions and deployments on the next layer up. So all of this stuff is, you know, really feeding into each other in a way that I haven't quite seen before.

Like, where you are just making big decisions on things that really want to move super slow, but where you have to make them move fast because, like, the technology is just sort of evolving at such an incredible pace. It's really, really interesting. Like, you know, I will say, like, I think you guys have Jensen coming on later, like-

Moderator

Tomorrow morning, yeah.

Kevin Scott
CTO, Microsoft

Yeah. I mean, yeah, everybody in the ecosystem is moving materially faster right now than they were three or four years ago. Materially faster.

Moderator

Is it because the science and the technology is moving rapidly, or is it... what is driving this?

Kevin Scott
CTO, Microsoft

Look, I think.

Moderator

Never seen anything.

Kevin Scott
CTO, Microsoft

I think it's a feedback loop. I think you've got a bunch of really smart people who can respond very well to urgency. And, like, you know, the place that we're in right now with infrastructure is, you know, people ask all the time, like: "Are you building too much or building too little?" And so far, it's, you know.

Moderator

That's what they want to know. Are we building too fast, too quickly? How much?

Kevin Scott
CTO, Microsoft

Yeah, like.

Moderator

And.

Kevin Scott
CTO, Microsoft

Dude, I want to know it, too. But, like, so far, demand for the infrastructure has materially outpaced our ability to supply it. And so, like, we are building at a pace where, you know, based on our understanding of where, you know, demand is gonna be, like, we're trying to sort of intercept things that, like, like I just said, like, there are a bunch of slow-moving things in the equation that, like, we've just really had to try to make move much, much faster than they were moving before. Like, the thing that I will say, like, I think the whole ecosystem is responding incredibly well. Do I wish it were faster? Yes, I wish it were faster, but, like, thank God it's so much faster than it was, like, four years ago, because we would really be in a pickle then.

Moderator

Got it. I want to get your views on compute costs. Generally, with tech cycles, the underlying inputs become cheaper. You get mass market, standardization, et cetera. Given the high cost of compute here, how important do you think it is to bring down compute costs? And if you think it is, what are the developments that might support that view?

Kevin Scott
CTO, Microsoft

Yeah, so it's super important always to bring down the cost of compute. Like, one of the, you know, the things that has supported all of these platform revolutions that we talked about, personal computing, internet, smartphone, cloud, all of them, has been this ability, from silicon to networks to, like, the low-level software layers, to empower the layers running on top of them to get exponentially cheaper over time. And I think we are definitely seeing that. Like, you know, I don't know exactly what the number is right now, but, you know, back in May, when I was giving my keynote at Build, yeah, the anecdote that we gave the audience was that, at that point in time, GPT-4 had gotten 12 times cheaper per token to do inference on than at launch time.

And so part of that is because the hardware had gotten a lot better. And, like, part of it is because the infrastructure had been tuned within an inch of its life. Everything from, you know, numeric kernels, where people are writing the moral equivalent of a bunch of assembly code to, like, extract every ounce of performance out of the chips that you've got. And then, like, just foundational improvements to the algorithms that are running on top of the hardware layer. And I think this is just gonna continue over time. So, you know, like, the important thing to realize is things are already, like, getting, you know, on a price-performance basis, like, way cheaper than they were.
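
The compounding cost improvement described here can be sketched as simple arithmetic. The 12x figure comes from the talk; the launch price and the split between hardware and software gains below are invented for illustration:

```python
# Illustrative arithmetic only: the 12x figure is from the talk; the
# launch price and the hardware/software split are invented assumptions.
launch_cost_per_1k_tokens = 0.06      # hypothetical USD price at GPT-4 launch
hardware_speedup = 3.0                # assumed gain from newer silicon
software_speedup = 4.0                # assumed gain from kernels + algorithms

# Gains at different layers of the stack multiply, not add.
total_speedup = hardware_speedup * software_speedup
current_cost = launch_cost_per_1k_tokens / total_speedup

print(f"{total_speedup:.0f}x cheaper per token")
print(f"${current_cost:.4f} per 1K tokens")
```

The point of the multiplication is that independent improvements at each layer compound into the large headline number.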

And, like, there's super good stuff coming, like from, you know, hardware to system software to algorithms that should keep that trend moving. And, like, we just got to keep pushing super hard on it because, like, if you really, really want.

Moderator

Yes.

Kevin Scott
CTO, Microsoft

All of this stuff to be ubiquitous, like, you need it to be as cheap as possible, so everyone can use as much of it as makes sense.

Moderator

Kevin, you piqued my interest by saying, "super good stuff coming." So, to the extent that you can share it, what, at a high level, conceptually, are the things that are giving you that conviction?

Kevin Scott
CTO, Microsoft

Unfortunately, very little that I can.

Moderator

Okay, that's all right. I don't wanna create problems. We'll just take that as a given. Yeah, if we were off the record...

Kevin Scott
CTO, Microsoft

It's kind of like I could share. But, so, yeah, look, I think the thing that ought to give everyone...

Like, we'll have things that are coming shortly that'll be super visible, that I think, yeah, will be very encouraging for people looking for signs of progress. But, like, you can even just look at what's happening on a week-by-week basis, you know, like, all of this competition is happening, where, you know, Meta's doing great stuff with Llama, and Anthropic is doing super good stuff, and Google is doing. I mean, so, yeah, you know, there are these objective benchmarks for how everyone is performing, and, like, you know, because of competition, and because the, yeah, the science and the, you know, sort of engineering is happening at such an incredible pace, like, you know, just every week, things are getting better.

And, like, the point that I've, you know, been trying to make for a while to all of the folks inside of Microsoft is, like, there is a weird nature of how the frontier progresses, which is, like, you go build gigantic supercomputing environments, which are like big capital projects that take a very long time to build, and then you put them in the hands of people who are gonna train a frontier model on them. And then, yeah, they optimize their workload on that environment, and then they do this extraordinary work, and then, like, you get a new frontier model.

And, like, because of the nature of the way all of this unrolls, you're just applying dramatically more compute to the, you know, to the problem, and it just happens in these, like, increments, because of the way that you're building all of the systems. And so, yeah, the thing that people forget sometimes is, like, between the updates, like, you can get into this mode where you've convinced yourself, "Well, you know, like, progress is only linear. Like, you know, this benchmark only got this much better." And you sort of forget that, you know, like, you look at our partner, OpenAI, like, what the big updates have been, like the jump from GPT-2 to 3 and from 3 to 4.

And, you know, like, I can't say anything about, like, what's next and when, but, like, it's not like work stopped after GPT-4. So, you know, the thing that we have seen for the past six years with generative AI is, like, every couple of years or so, just because of the lockstep way that all of this stuff gets built, like, you get a major new update to the frontier.

Moderator

So, Brett and Kendra, we'd love to, when he's ready to officially announce the good stuff, we'd love to host you back at a Goldman Sachs AI Symposium. Just putting it out there. Always putting in a plug for the firm. How dependent is your AI strategy on OpenAI? Because you also have your internal AI, with a CEO of AI.

Kevin Scott
CTO, Microsoft

Yeah, I think OpenAI, like, by any objective measure, has been one of the most consequential partnerships Microsoft has ever had. And, you know, like, we're a company that's had a lot of consequential partners. So, like, we're super, super grateful, and, like, you know, I think we've been able to do things in a pretty extraordinary way just because it's, like, two really capable companies, like, trying to, you know, push a big platform shift forward, rather than, like, one trying to do everything. So, like, we don't even think about it in, you know, terms of, like, being super dependent. It's, like, a complicated bag of problems that we're collectively trying to solve.

And just like with the PC revolution, where you had, you know, Microsoft and Intel and, like, a whole bunch of OEMs, like, you know, doing this. I mean, you just sort of think about, like, this is before my time at Microsoft, I've only been there for, like, a little under eight years now, but the mission of the company, since, like, the point where it was founded, was to put a personal computer on every desk and in every house. And, like, that's at a time when people didn't even know what a personal computer was. And so, through that partnership, like, the entire ecosystem was able to deliver that mission, which is, like, just completely audacious.

I think this is another, you know, mission. Like, you know, really unlocking the full potential of AI to create value for people everywhere is another, like, equally large thing. Like, I just don't think it gets done by, like, one entity. It's a lot of people working very hard in the same direction.

Moderator

And hence, that's why you have your own AI CEO internally, and then you have-

Kevin Scott
CTO, Microsoft

We do. Like, I mean, Microsoft has had, like, AI researchers, you know, working on AI since the 1990s. Like, we were working on artificial intelligence when I was an intern at Microsoft Research in 2001.

Moderator

You were an intern at Microsoft Research?

Kevin Scott
CTO, Microsoft

Yeah. Dude, yeah. Microsoft Research reports to me now, and, twenty-three years ago, I was an intern at Microsoft Research.

Moderator

Any intern at Goldman Sachs, just take that as an inspiration that.

Kevin Scott
CTO, Microsoft

But, you know, like, so there's a lot of AI that Microsoft is doing, that is, like, very complementary to what OpenAI is doing. And, you know, we were doing it before. It's gonna continue, you know, for the foreseeable future because, like, it's a really large surface area. There are a lot of problems that need solving.

Moderator

Good to know that. Good to know that. This, again, I'll preamble by just saying this is a Marco Argenti question, so it's gonna sound very erudite, right, Belinda? We seem to be moving from chatbots to agents very quickly. What's the vision with regards to AI performing more and more complex, long-running tasks? Do we see a future where AI-powered agents will be able to perform tasks that require planning, decision-making, and execution across multiple environments and systems? This man, what a beautiful question. It's like poetry, right? That's why I had to give credit to Marco.

Kevin Scott
CTO, Microsoft

So, the.

Moderator

Nobody can.

Kevin Scott
CTO, Microsoft

The answer to the question is yes, and I guess, why do I believe that? Yeah, so look, I think one is, like, it's just necessity. So in order for AI systems to be fully useful, like, they do need to be able to do longer-term planning. They need to have memory. They need to be able to actuate, like, products and services and interfaces on behalf of users. And I think there's a bunch of good work that's been happening, you know, on everything from orchestration layers, where, like, you're giving developers, like, really good frameworks for figuring out how to uplift the basic capabilities of models to, like, help them do more of, you know, these sorts of long-range, like, multi-step tasks. And then the models themselves, I think, are getting, like, more capable of synthesizing plans on their own.

I mean, you can even see a little bit of this, like, if you go to ChatGPT right now and you ask it to, like, give you a plan for something. Like, it can articulate, like, pretty comprehensive plans for very complicated things. And so, like, the next thing that you would want after that is for the agent to be able to say, like, "Okay, I see the plan. Go do that." And I think that's-
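
The plan-then-execute step being pointed at here can be sketched as a tiny agent loop. The planner stub and tool names below are invented stand-ins for a real model call, not any shipping agent framework:

```python
# A minimal plan-then-execute agent skeleton. plan_with_llm is a stub
# standing in for a real model call; the tool set is invented.
def plan_with_llm(goal: str) -> list[str]:
    # In a real system this would be one model call returning the steps.
    return [f"search: {goal}", f"summarize: {goal}", "report: findings"]

TOOLS = {
    "search":    lambda arg: f"results for '{arg}'",
    "summarize": lambda arg: f"summary of '{arg}'",
    "report":    lambda arg: f"report on '{arg}'",
}

def run_agent(goal: str) -> list[str]:
    """Execute each planned step by dispatching to a tool, keeping a
    transcript of results as the agent's working memory."""
    memory = []
    for step in plan_with_llm(goal):
        tool_name, _, arg = step.partition(": ")
        memory.append(TOOLS[tool_name](arg))
    return memory

for line in run_agent("quarterly cloud revenue"):
    print(line)
```

The missing piece in today's systems is exactly the middle of this loop: letting the model not just articulate the plan but reliably drive the execution and recover when a step fails.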

Moderator

That's what's next.

Kevin Scott
CTO, Microsoft

Yeah, look, I think, you know, lots of people are working on filling out that hole in the capability of these systems. So yeah, I think lots of good stuff coming on that front.

Moderator

Got it. We have two minutes. I don't have any more questions. Is there a question that you want to create for yourself, and answer yourself?

Kevin Scott
CTO, Microsoft

Oh, God.

Moderator

You're a prompt engineer, right? I mean, so.

Kevin Scott
CTO, Microsoft

Yeah. I don't know. So I think the thing that we've seen over and over again with building hard things is you wanna strike the right point between aggressive pessimism and aggressive optimism. Like, you just don't wanna get yourself caught up in hype in either direction. So the thing that we like inside of Microsoft, as we're trying to do very hard things with these very complicated models, is you want teams to be as ambitious as humanly possible in how they're putting this stuff to work. Like, you really want to find the things that, like, just went from impossible to hard.

Like, you probably don't want to, like, spend a whole bunch of your energy doing a bunch of incremental things because, like, optimizing an incremental thing when, like, the underlying model infrastructure is getting more and more powerful so quickly, like, probably means that the model is gonna be able to do a whole bunch of this incremental stuff. And, like, this was the lesson we learned years ago, like, in the very, very early days of our partnership with OpenAI. Like, I would have teams inside of Microsoft that would, like, take GPT-3, and they would go build this fine-tuned, like, super optimized thing, and it was, like, you know, 3% better on some benchmark and, like, a little bit cheaper. And they'd be like: "Yay, victory!" And then GPT-4 came, and it's like, "Ah, crap, like, what do we do now?"

Moderator

Get better, yeah.

Kevin Scott
CTO, Microsoft

Yeah, so you just—you wanna be on the frontier with your ambitions. Like, it's a good spot to be.

Moderator

That's great. We are right at the top of our allocated time. With that, on that note, thank you so much.

Kevin Scott
CTO, Microsoft

Yeah, thank you.

Moderator

For giving us your perspective.
