Intel Corporation (INTC)

New Street Research 3rd Annual Semiconductor Big Idea Virtual Conference

Sep 14, 2023

Moderator

Well, thank you all again. I hope everyone's enjoying the conference. We are joined today by Arun Subramaniyan of Intel and Suchi Srinivasan, who will sit on a panel to discuss the emergence of GenAI in the enterprise, AI use cases, how enterprises are getting ready, et cetera. Very quickly to introduce them: Arun joined Intel to lead their cloud and AI strategy team. Prior to that, he was at AWS, where he led global solutions for machine learning, quantum computing, high-performance compute, autonomous vehicles, and autonomous computing. While he was there, his team was responsible for developing solutions across this full portfolio, and Arun grew the business by 2-3x over that period.

Suchi Srinivasan, a friend and colleague, is currently an MDP, a managing director and partner, at BCG, where she co-leads the enterprise GenAI solutions vertical. Prior to joining BCG, she accumulated more than two decades of experience in the tech industry, building ecosystems, delivering solutions, and bringing new products to market, spanning stints at Microsoft, McKinsey, and Dell. So thanks to both, and of course to Pierre for joining as well; he needs no introduction. Pierre is the managing partner of New Street Research and a co-host for the conference. Thank you all for joining. Maybe I'll kick us off. We've gotten some great questions, so maybe I'll direct this one at Suchi, and Arun, please feel free to add your thoughts as well.

Can you just walk us through how AI, both traditional and generative, is being leveraged today in the enterprise? What are you seeing in terms of value creation and value capture, and what's the state of play?

Suchi Srinivasan
Managing Director and Partner, BCG

Yeah, great question. So to kick us off: AI is not new to enterprises. It's been around for quite a few years now, used traditionally in situations like optimization and forecasting, scenarios where you want more predictive power. What's different about this wave of AI, with deep learning and other kinds of techniques, is that it can now be applied across a vast variety of operational efficiency use cases; that is the first wave of this. Taking the new capabilities of generative AI, which are logic and reasoning, and applying them to a vast variety of knowledge worker workflows to get to those operational efficiencies is where we currently see most of the application.

And because of the new economics of these types of technologies, there is vast potential to apply them in more disruptive, innovative, and customer-facing ways, and I think we'll continue to see that as the technology matures. So there's vast potential for value creation now at scale. Arun, any thoughts?

Arun Subramaniyan
Former VP of Cloud and AI, Strategy and Execution, Intel

Hi. First of all, thank you so much for having us here, Ramiro and Pierre, and thanks to everyone for joining as well. The fundamental change we're witnessing between traditional AI, which, as Suchi mentioned, is nothing new and has been going on for decades, and what we're newly seeing with generative AI, is really the empowerment of a large set of practitioners who can all of a sudden take advantage of very advanced AI in their day-to-day work. And not just take advantage of it, but do so in a way that fundamentally changes how fast they can apply it in the fields they are experts in. That's the significant shift we are sitting on the cusp of, and that's really where the significant potential of this new age of AI is coming to bear.

Moderator

Got it. So maybe looking inward, Arun, toward the semiconductor industry in particular: how do you see AI changing the way that firms design, build, and sell chips?

Arun Subramaniyan
Former VP of Cloud and AI, Strategy and Execution, Intel

Absolutely. Again, the semiconductor industry is no stranger to AI. We've been using AI in every aspect of how we design, build, deploy, and sell. But that's been the traditional world of AI, where you're using it for chip design, or to figure out how to get your placements right, and then for activating verification and validation processes in your design. And in the build process, of course, it's a massive advantage in manufacturing. But, once again, the significant shift we are starting to see is the speed with which these technologies can get adopted, and the activation of end-user teams, rather than having, say, armies of PhDs working right next to every single project that needs to get deployed. That's really the biggest shift.

Here, instead of what I would call an army of PhDs doing data science for a living, you start having a generative AI model that acts almost like a copilot to a domain expert, working alongside them. The expert can guide the model, both in creation and in figuring out whether the model is hallucinating or going off track, while also using it in a way that fundamentally accelerates all of these processes.

So in that sense, the semiconductor industry is not different from other manufacturing industries at large. But the closeness of activating these models, the ability to do all of these things because of the availability of new-age hardware, and then using the same AI to design that hardware to be better: I think that's a massive adoption wave waiting to happen.

Suchi Srinivasan
Managing Director and Partner, BCG

Yeah, maybe one other comment I might add here: a lot has been said about the application of generative AI to the creative industry, generating marketing copy, a first draft of any kind of written material, overcoming writer's block, if you will. We see a version of that being very helpful across several of the use cases Arun alluded to in the semiconductor industry as well, right? Whether it is pattern recognition for defects in validation and testing, across vast amounts of historical material spanning time and types of designs, or whether, in a more extreme version, it is actually coming up with a design or a specification.

On the first draft, if you will, of what is produced: in other client situations we're consistently finding that the first draft is of a certain quality and produced at a very high frequency, which reduces the manual labor. It doesn't take the human out of the loop, which I think we should come back to as a topic a little bit more, but it aids them. That's how the technology fundamentally speeds things up and creates efficiencies all across the value chain in the semiconductor industry.

Moderator

Maybe one question to make it a little more concrete. If you imagine you're a technician in a fab, or a chip designer, how might you be using these copilots or these systems today, and what would you expect the evolution to look like over the next year, or three, four, five years?

Arun Subramaniyan
Former VP of Cloud and AI, Strategy and Execution, Intel

Well, I'm smiling because you mentioned, say, a technician, somebody who has to make sure the equipment is running flawlessly. Again, applying AI for what we would call traditional or predictive maintenance is not new, right? But most people who are actual practitioners would attest: yes, I have this fancy new gizmo, but how I really run my factory or plant is through a spreadsheet. And it's highly dependent on the manager's skill set to go in and see what the machine is doing and what the factory is required to do, and then they basically decide which faults to go after or not.

In fact, a standing metric in pretty much every industry is that if you have an automated system doing fault diagnosis, about 90% of the faults are ignored by the practitioners on the floor, because they don't trust whatever the machine is telling them, right? The significant change is that now you can do the same kind of predictive maintenance, but instead of looking at one piece of incoming data, you can look at the holistic plant data, including things like chat messages between different operators who are actually going in and looking at what is going on with the machine. All of the machine data coming in, all of this unstructured information, can be put together in a way that starts giving meaningful inputs back to the decision makers.

The previous world required a lot of structured information, and the problem with structured information is that you can only look for things you already know. In the real world, things evolve in a very unstructured way. What I mean by that is: when something goes off, the way the environment is being operated, and how the humans in that environment react to it, generate a lot more information than the structured world, which is just measured information on a particular piece of equipment, right? Before, it was not possible to connect the dots there. Now it is starting to become possible.

So that's really why, and I want to be a little cautious here, it's not like all of these problems have been solved with generative AI yet, but the potential for them to get solved is significantly greater, right? That's really what we're starting to see: we're on the cusp of very similar-sounding use cases, but approached in a fundamentally different way compared to how it's been done over the last decade or two.

Pierre Ferragu
Managing Director and Global Team Head of Tech Infrastructure, New Street Research

Yeah. I think what you both say, Suchi and Arun, resonates a lot with our perspective on the matter. One thing is, as you say, Suchi, it's not taking humans out of the loop. I think that's a very important point to make when we speak of AI. I always like to say that "artificial intelligence" is a very bad name, because naming it intelligence makes you feel like it's going to replace you.

Suchi Srinivasan
Managing Director and Partner, BCG

Yeah.

Pierre Ferragu
Managing Director and Global Team Head of Tech Infrastructure, New Street Research

It's just a tool. It's very important to remember AI is a tool. The second thing I think is very interesting in the way we think about it is how difficult it is to get AI into a workflow, into a process, into something. One thing we believe is that the AI opportunity is structuring itself as a pyramid, with gigantic models and a very small number of players at the very top. But we really expect the industry to grow with a lot of diversity, in which almost every use case requires very, very specific development and very, very specific integration work. And this is going to be fascinating.

And then the last thing: what you just said, Arun, I think is very important, this idea that we had AI last year, but something changed. I think the thing that changed is that AI moved from giving you a percentage probability that there is a malfunction, which everybody ignores-

... to having a conversational relationship with a tool that gives you some hints about what's going on. Instead of getting a 90%, you chat with a tool you could ask, "And what do you think John would think about this risk of failure? Do you think John would agree with you?" We don't know how good the generative AI will be at answering the question, but it will answer the question. And I think that's a massive, massive catalyst for adoption of AI.

Suchi Srinivasan
Managing Director and Partner, BCG

Yeah. Sorry, Arun, go ahead.

Arun Subramaniyan
Former VP of Cloud and AI, Strategy and Execution, Intel

No, go ahead.

Suchi Srinivasan
Managing Director and Partner, BCG

Maybe just picking up on one point, Pierre, from the three things you spoke about. The democratization of the technology, absolutely: the conversational interface has given even non-technical people the ability to interact with these systems in meaningful ways. Secondly, the technology stack is maturing. Currently it is very dynamic, of course, and it will see quite a few more evolutions before any winners and losers are decided. But as that stack matures, there is already a great deal of emphasis on the tooling to help it integrate into existing IT infrastructures. So we do see a lot of investment, a lot of activity, to make that more intuitive.

What that means is that, over time, we expect these things to become quite standardized, with standard APIs to integrate into enterprise IT systems, which will also make the proliferation of these systems easier, right? Again, we're not there yet; this is a point in time in the future, but we anticipate investments will be directed toward that. Arun, go ahead.

Arun Subramaniyan
Former VP of Cloud and AI, Strategy and Execution, Intel

Yeah, absolutely. The two things you mentioned, Pierre, were interesting, right? Where AI needs to get adopted, there are still significant barriers, and two barriers have always existed. One was the technological barrier: getting the software plumbing, if you will, enacted in enterprises. The second big barrier was talent: how do you make a talent pool that is super scarce actually scale across all of these enterprises? I think the biggest unlock is on that second front, where you no longer need an army of super-scarce talent. You still need talent, but you can leverage the talent enterprises already have and augment it with a few standardized providers, as Suchi mentioned, and then work is still required on the software piece.

But that's something a lot of enterprises, and a lot of providers, actually know how to go do. And bridging that gap is what is going to activate quite a few use cases across the board. We like to think of that as AI everywhere, right? Expecting a copilot to be with you that is specific to your company, or even specific to your domain, is something we see as a near-term reality across multiple industries.

Moderator

One question I wanted to come back to. Suchi, you made the point about investments and how this ends up folding into the standardized IT infrastructure, the IT stack, et cetera. This is actually for both Suchi and Arun: where are we in the adoption cycle? How are enterprise CIOs thinking about budget? How are they thinking about the kinds of experiments they are or aren't running? It'd be great to get a sense for that.

Suchi Srinivasan
Managing Director and Partner, BCG

So maybe I'll get us started with some facts and figures. We did some studies on this quite recently. From an adoption point of view, we are still pretty early in the adoption curve, right? While we see a very large percentage of enterprises, 50%-60% of them, testing foundation models in one way or another across POCs, very few, less than about 9%, have actually managed to scale across multiple use cases and into production-grade systems. So that's the state of play. There are lots of reasons behind that, including the state of technical fragmentation and the preference of enterprises to retain their optionality.

Now, from a spending point of view, we find that CIOs and executive teams across enterprises are absolutely realizing the importance of this technology in their future technical plans and in building competitive advantages. Investment in foundation models is expected to double as a percentage of IT spend in three years; the latest studies we did show that approximately 2% of IT spend will go toward foundation models by 2026. So it's not a small amount. But, and I'll just leave a placeholder for this, there are non-trivial issues yet to be solved on the path to scale, and solving them will see this number accelerate even more in our projections. Arun, maybe some observations from the ground level.

Arun Subramaniyan
Former VP of Cloud and AI, Strategy and Execution, Intel

Yeah, I can give some personal anecdotes as well, having been part of the cloud adoption story in the enterprise over the last decade, and having tried to get AI adoption going in enterprises for more than a decade now. This is faster than anything I've witnessed, right? What I mean by that is: the mass-market adoption really started only last December, with the release of ChatGPT. Before that, a lot of people were playing around with it, but there was no significant shift toward "let's go figure out how to adopt it in our enterprise." From there to now, we're already seeing a significant portion of the senior leadership in multiple enterprises starting to do vendor selection.

So they're doing their experiments, but even as they do, they're already in the stage of vendor selection, let alone starting to deploy some of these applications and products, right? This pace, I would say, is easily 2-5x faster than what happened even with the cloud. Part of the reason is access, and people have also been, I'll say, conditioned, because they've invested in data for over a decade. So they're already prepared to get started. But it also shows the level of access people have to get started on these kinds of capabilities, this quickly.

Moderator

Got it. Interesting. One question that comes up a lot, at least in some of the work we do, is on value: the creation, capture, and realization of value by end users. Are you seeing that folks have high thresholds right now, where they need to see value immediately? Or is there still experimentation and learning? What's the gamut of what's happening in the market?

Arun Subramaniyan
Former VP of Cloud and AI, Strategy and Execution, Intel

Maybe I'll take that question first, Suchi. There's almost a bipolar disorder in the market today, right? When people get in, they think there is finally a silver bullet: we're going to sprinkle this generative AI thing on anything we have not been able to solve, and magically things will get solved. Unfortunately, they get very disappointed. In fact, we've seen enterprise after enterprise where a team, or multiple teams, has gone and done a small-scale POC. They've taken five documents, ten documents, thrown them in there, and all of a sudden they can do search or generate content significantly better than what they could do before.

Then invariably, somebody in the company asks, "Okay, this is all great, but my business unit has 1 million documents. How long will it take you to process them? How much is it going to cost me?" And the answer becomes, "Oh, $2 million. Come back after 12 months, then maybe you'll get there." And the discussion completely stalls there, right? We're also starting to see enterprises say: okay, we've hit a wall, so how do we get to the next level? You can get to the next level, but it requires some methodical thinking, and it's not easy to go from having a model to getting a business outcome.

The main point we are trying to educate the industry on is that everybody is conditioned to think of a model as an end in itself. Really, a model is a means to an end. Most business outcomes require multiple models to come together in a way that is specialized enough to get you to your outcome. And not just any outcome: what we mean is a sustainable outcome, something you can deploy into your enterprise at a price point, a cost point, that is actually sustainable for the enterprise.

And that goes back to your value question: if you're not getting 10x the value of the total cost of ownership of the model, plus all the mechanics that go around it, the enterprise's adoption really won't stick. We are seeing some cases where it's way more than 10x, but you need to be methodical about it, and it's very early stages, to Suchi's point before.

Suchi Srinivasan
Managing Director and Partner, BCG

Yeah. Some of the specific issues we see enterprises encountering include a very high requirement, for example in many use cases inside the semiconductor industry, though this is true of other industries as well, for trust, explainability, and governance, right? Not just reasonably solving the problem, but being very accurate, being able to trace all the data flows, understanding the liabilities associated with this, and having trust in the system. These are non-trivial issues when putting these technologies into customer-facing systems, or into places that require, for example, a high-security software and hardware supply chain, where things have to be guaranteed.

That is coupled, coming back again to the fragmented ecosystem, with uncertainty about the level of vendor support. There are a lot of startups, but it's not clear how to view vendor relationships and all of that. Then, to some degree, domain-specific models, especially for the semiconductor industry: it's not about the size of the model, which may work very well for English-language and more general-purpose queries. To solve problems that are context-specific in this domain, you might be looking at smaller, more performance-efficient, but domain-specific models. And last but not least, when we think about the economics of all of this, we see clients really struggling to understand the operational cost envelope.

There is the one-time cost to set things up, and there is the inference cost to run the system. And we're back in a world of a lot of consumption pricing, so how does this scale with usage? Modeling that is actually not straightforward, let me put it that way, right? There's a lot of complexity associated with it. Taken together, all of these factors are just things to work through. We see clients methodically trying to build an understanding of these issues as they build out enterprise-wide programs and get to scale.

Moderator

You touched on something I wanted to ask about earlier, this notion of domain-specific models. Maybe for the benefit of the audience, can you talk about the different classes of models and how you see that evolving? Which will win in which domains or areas?

Suchi Srinivasan
Managing Director and Partner, BCG

Arun, do you wanna take that?

Arun Subramaniyan
Former VP of Cloud and AI, Strategy and Execution, Intel

I can take this one. So, in spite of all the hype for GenAI and the largest of large models: what I typically like to say is that the largest model is maybe comparable to a mediocre high school student. In terms of cognition, in terms of ability to reason, it's actually way lower than that. In terms of repeating the information it was trained on, it's probably slightly higher than a high school student, but cognition is way lower. Now, domain-specific models, which I would say stand on the shoulders of the giants, the giant models, might be smaller by themselves, but they don't need to be taught to understand English, for example; they understand their particular domains very, very well.

The equivalent in my analogy would be a college student, right? Somebody with a college degree in that particular field. But you're not going to take a fresh college graduate and make them chief engineer at a particular company, right? So companies will also have to build company-specific models that take all of what I would call the institutional knowledge of the company itself, and bottle whatever that company's unique differentiation is up into a model. That's really where expert-level knowledge starts coming in. You still need human experts to make sure the model behaves the way it's supposed to, but each of these models stands on the others, right?

So the large, let's call them open, models out there understand language. Domain-specific models understand a specific domain in addition to understanding language. Company-specific models are the experts inside that particular company, in that particular domain. In rough orders of magnitude, the largest models today are hundreds of billions of parameters; that's really where they are, several hundred billion parameters. Domain-specific models can be anywhere between a few tens of billions and 100 billion parameters. Company-specific models can range anywhere from a few billion to tens of billions of parameters, because you don't need those large models when you are already relying on the other large models to understand language, right? But it's not one versus the other; it is everything working together, right?

Moderator

Yeah.

Pierre Ferragu
Managing Director and Global Team Head of Tech Infrastructure, New Street Research

I would even push your model one step further, Arun. When you see what's happening, in particular in bioscience, and I'm not sure I know of similar use cases outside this vertical, but in bioscience you come to the point where you develop models that are specific to just a single question. It's not even an industry model or a company model; it's actually a-

Arun Subramaniyan
Former VP of Cloud and AI, Strategy and Execution, Intel

Yeah.

Pierre Ferragu
Managing Director and Global Team Head of Tech Infrastructure, New Street Research

A generative model for proteins that is being fine-tuned again, and we are talking fine-tuning across tens of billions of parameters.

Arun Subramaniyan
Former VP of Cloud and AI, Strategy and Execution, Intel

Yeah.

Pierre Ferragu
Managing Director and Global Team Head of Tech Infrastructure, New Street Research

-to the neighborhood of an existing protein. And when you look at-

Arun Subramaniyan
Former VP of Cloud and AI, Strategy and Execution, Intel

Yeah.

Pierre Ferragu
Managing Director and Global Team Head of Tech Infrastructure, New Street Research

Personalized medicine, you'll end up having models similarly fine-tuned to one specific medication for one specific person. That's very impressive.

Arun Subramaniyan
Former VP of Cloud and AI, Strategy and Execution, Intel

Yeah, you're absolutely right. You get into medicine as a field, then into, say, pathology, and within that field you're talking about specific classes of diseases, so you're getting very, very specific, and then you get into personalized medicine. What we're starting to see, and again this is all at very, very early research stages, is the model being built for a particular disease, for a particular individual, and then continuously evolving based on how that individual is reacting to the overall treatment protocol, right? Those are fascinating use cases.

Moderator

So, looking out five, ten years, what do you expect in terms of the landscape of suppliers, the folks providing these kinds of generative AI services? How do you think this shakes out?

Arun Subramaniyan
Former VP of Cloud and AI, Strategy and Execution, Intel

So, we're only starting to see this happen, but I do strongly believe and hope this is where it goes: that it ends up really in the hands of every practitioner out there. When we say AI everywhere, it's truly everywhere. The example I'll give is mobile phones, right? Just imagine, 15 years ago, somebody saying every single person out there would have a smartphone in their hands. It wasn't even science fiction; it was unthinkable. But it happened so quickly.

This is a piece of technology that might see even faster adoption. And it's also about being able to say: I want to do something super personalized, and I want the ability to create a model that does what I need it to do and continues to evolve with me. So it constantly keeps predicting outward, but it is super personalized for me. That model understands everything else going on outside, and it understands what I need to do for myself, in a safe, secure, private way, right? That's a significant shift from what is possible today.

Today, if I want to do that, I have to give all my data out to somebody, pay somebody else significantly more, hope they'll do what I need done, and also be an expert in a lot of other things myself. It's that shift that I don't think we have to wait a decade for. It's definitely something we're probably going to see in the next five years, or even less.

Moderator

Are you anticipating, I guess, the industry moving toward more horizontal solutions, or more vertical ones, industry-specific or domain- or problem-specific? How do you anticipate that plays out?

Arun Subramaniyan
Former VP of Cloud and AI, Strategy and Execution, Intel

Suchi, you wanna take that?

Suchi Srinivasan
Managing Director and Partner, BCG

I do think that today we're seeing the emergence of some of these horizontal solutions, right? I'll take call center solutions as an example. That's a very predominant area of application of these large language models, and it's industry-agnostic; we see it across the board. Marketing personalization, those types of things, are emerging as horizontal use cases. I do think, and even a year is a long time in this technology's lifetime, so I'm using my words with caution here, that the next phase of growth will put more emphasis on these vertical-specific pockets of value. Actually, what we see is a once-in-a-lifetime opportunity across verticals for value chain disruption.

Because what this technology is doing is fundamentally disrupting the economics of data gathering and the economics of hyper-personalization. As a result, there is an opportunity for players all across value chains within industries to look up and down into adjacencies and see where there is a possibility for disruption. We do see many forward-looking enterprises doubling down on their AI efforts to engage in strategy efforts and experimentation in that way, so we expect those to pick up speed. Many of those will rely on fine-tuned models. Some of the higher-barrier verticals, and I consider semiconductor, oil and gas, and energy among them, which use more esoteric or time-series types of data, may actually require domain-specific models to enable this kind of value chain movement and disruption.

So we see intense activity as the engineering problems associated with the technology are solved. So, you know, whether it's hallucination mitigation techniques such as grounding, security and privacy, or price-performance efficiency, by the way. So as these things mature, we expect that experimentation inside domains to really pick up speed, and that's where we see predominant value creation happening, in the next sort of, you know, round beyond the horizontals. Arun?

Arun Subramaniyan
Former VP of Cloud and AI, Strategy and Execution, Intel

Yeah, I definitely agree with that. I mean, to a large extent, the horizontals are already being solidified by the hyperscalers, right? So whichever of these hyperscalers you're talking about, they're doing that. A significant portion of what I would call model-focused startups are also trying to do that, but it's much harder to do that in a horizontal space. But where we are seeing significant growth is really in the verticals. It's about, I have a specific problem, and that specific problem can potentially be solved with gen AI, but there's a gap. And how do you go address that particular gap? You, of course, need some notions of the horizontal elements, not only around security and, say, accurate prediction, but also from the standpoint of traceability, right? So those are things that we're seeing a significant-

Moderator

A few minutes here. I'll open it up. And it's just kind of a general question. You know, what are you most excited about as it comes to generative AI? And what do you think are the biggest challenges generative AI will either create or face in terms of adoption? So maybe, Arun, we can start with you, and then-

Arun Subramaniyan
Former VP of Cloud and AI, Strategy and Execution, Intel

Absolutely. I mean, what excites me the most really is getting closer and closer to business outcomes with AI, right? So, I mean, like, we've not seen this kind of opportunity in a long time. And getting this in the hands of more and more practitioners is really what excites me on a daily basis. And we're seeing that pull, it's no longer just a push, right? And so that's super exciting. The significant challenges are two. One, of course, is to make sure that we don't call this a silver bullet and say, this is gonna solve everybody's problems all the time. We've got to be super careful about that.

But really, the other one, which we didn't touch upon much, is the cost of doing this. Like, just the infrastructure cost; the ability to just brute-force this through is not sustainable, right? So we need to be careful about what the total cost is and how you make sure that this adoption is actually sustainable over the long term. So those are things that we are certainly focused on, and we continue to work with customers on them.

Suchi Srinivasan
Managing Director and Partner, BCG

To add on to that, I think those are good points, and I agree with what Arun said. So, adding a little bit from the people element of it, 'cause it is also about the human in the loop, at the center. I think what's exciting about this technology is it unlocks new, may I say, creativity as well as productivity for humans, right? It's sort of the next frontier of unleashing productivity. And at the same time, this technology, the copilots, if you will, are such that they really force us humans to take a deeper look at what our role is in the process and the workflow, how best we can engage, what is uniquely human about us. And I think that's actually very exciting. I don't know that humanity has had such a moment of introspection recently.

The challenge is actually just the flip side of the same thing, right? That is, there is this danger, if you will, because some of these answers, or the way the technology interacts with humans, are so convincing and so believable. Anybody who's used ChatGPT to plan a trip, like Marie, knows this. You know, there is a risk almost, if you will, of overdependence on the technology and delegation to the technology, which leads to a loss of creativity. So I think finding that fine human balance of what we can contribute uniquely with our skills and talents while using this technology to accelerate that march towards business and human outcomes is, I think, both what is exciting as well as what will be challenging for enterprises to figure out.

Moderator

Pierre, your thoughts? Oh, you're on, you're on mute.

Pierre Ferragu
Managing Director and Global Team Head of Tech Infrastructure, New Street Research

Yes. I feel I need to second Arun's second point. I have little doubt that over the next decade, we're heading into a world in which we'll each have, like, a car driving in a very automated and efficient way, and there will be a generative AI model behind it. And we are going to have a conversation with our car, a conversation with most of our objects, with our home. We will have medicine leveraging generative AI that speaks the language of nature. And I mean, I love the way Suchi described the way generative AI can get into what we do, what we create, what we produce, and enhance our capabilities in a very exciting way.

We talk a lot about transhumanism and enhanced humanity. I think using generative AI in everything we do will be extremely transformative. To me, the key challenge is really the infrastructure. In a world like that, and the way the technology works today, we will each need, like, at least 1 or 2 DGX servers running for us all year long. So that's, like, you know, going from a world in which we each have a $2,000 PC, to a world in which we'd each have at least a $300,000 server, which consumes like 1,000 times more energy than the $2,000 PC. But I'm not worried about it.

The innovation roadmaps are absolutely fantastic, and we discussed them a lot today: heterogeneous computing, silicon photonics. So we are going to get there, but I'm on the brink of thinking that we can't go beyond 2024 without significant innovation to start taking costs down. And that's not because I think we're going to spend less money on it, it's just that we can't spend 10 times more every year on it. And we've heard Kevin telling us that basically the workload is growing 10X every year. So that's really, to me, an urgency to address that problem.
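[Editor's note: Pierre's back-of-envelope can be made explicit. A minimal sketch, using only the illustrative figures he quotes in his remarks ($2,000 PC, $300,000 server, roughly 10x yearly workload growth); these are his assumptions, not measured data:]

```python
# Illustrative arithmetic behind Pierre's infrastructure-cost point.
# All inputs are the figures quoted in his remarks, not real market data.

pc_cost = 2_000        # USD, today's personal computer
server_cost = 300_000  # USD, hypothetical per-person DGX-class server

cost_ratio = server_cost / pc_cost
print(f"Per-person hardware cost ratio: {cost_ratio:.0f}x")

# If workloads grow ~10x per year and cost per unit of compute stayed
# flat, spend would have to compound the same way:
spend = 1.0
for year in range(3):
    spend *= 10
print(f"Relative spend after 3 years at 10x/year: {spend:.0f}x")
```

The point of the sketch is that either cost per unit of compute falls roughly as fast as the workload grows, or spend compounds untenably, which is why he frames cost reduction as urgent.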

Moderator

Absolutely. Well, Suchi, Arun, Pierre, thank you so much for joining us. To the audience, we will take a 15-minute or so break before we come back with Alex Zhavoronkov, the CEO of Insilico Medicine.
