Super thrilled to have Tom Siebel, founder and CEO of C3.ai. We're gonna be talking about enterprise AI this morning. Real quick on disclosures: for all of our full disclosures, just go to www.morganstanley.com/disclosures. With that, let's get started. I wanted to start off the conversation, Tom, just 'cause we had earnings last week, and get a couple highlights there, and then we'll get into the meat of what's a pretty exciting enterprise AI story. So you're coming off a quarter where you accelerated subscription revenue growth to 23%. You signed 50 agreements, I think it was 29 pilots, 17 of which were GenAI pilots. How would you characterize the quarter, and in what ways was this different from the past couple of quarters that you've seen?
And then heading into calendar 2024, what's the state of the demand environment for C3 AI?
I think it was a great quarter. I mean, we exceeded. For those who look at the world from, you know, cell AO, row 1,156, they kind of look at the world from there, and they have their projections as to revenue, cash flow, profitability, and what have you. We beat all those numbers. For those who look at the world from outside of the spreadsheet, you know, things look pretty good. I think that when we got started in enterprise AI, it was January of 2009. Okay, I remind you, this would be before Microsoft Azure, before the existence of Google Cloud, and before the $7 trillion of aggregate market cap in public AI companies that we count today. $7 trillion.
Before any of that happened, we started building a software stack, and spent the better part of 15 years and, you know, $2 billion building a software stack that would enable organizations to design, develop, provision, and operate massive-scale predictive analytics applications. This is AI for business, right? Stochastic optimization of supply chains, demand forecasting, customer churn, production optimization. And so now we get to... We talked about enterprise AI for, you know, some years. And for those who looked at the world from, you know, the perspective of column AO, row 1,152 of a spreadsheet, there was no apparent-- you know, there was a lot of speculation as to whether there would be a market for AI.
Well, I think that speculation has largely dissipated, and I think it's generally acknowledged that this AI thing might not be ephemeral. And so, as we power into 2024 and 2025, we see a huge addressable market for enterprise AI applications. We are, you know, the only company in the world, I believe this is a true statement, that has 40 turnkey enterprise AI applications in the market for utilities, for oil and gas, for manufacturing, for healthcare, for gov, for defense, for intelligence, what have you. And, I mean, it's not a bad position to be in. Now, as many of you are well aware, we transitioned our go-to-market model from subscription-based pricing to a consumption-based pricing model.
Over the past, say, eight quarters, we've seen top-line revenue growth rate initially compress, as we predicted. So it went from, like, I think in Q3 2022, 42% top-line growth. Not bad. Then 39%, then 25%, then 7%, then -4%, then zero. Now, as this consumption pricing model kicks in, we're seeing a return to growth. So we went from 0% year-over-year revenue growth to 11%, to 17%, to 18%, and now we think it'll accelerate going forward. And for those of you who have your models, if you run a linear regression against C3 AI, you'll find they're pretty highly correlated. So I think things look pretty good.
Let's unpack the generative AI opportunity. What does it mean? I think for most people in the room, you know, their first touch point was with ChatGPT, sort of on the consumer side. But Tom, I was wondering if you can give us a sense of what enterprises are trying to achieve with this technology. What are some of the use cases that they're trying to pursue, and what's their overall ambition here with generative AI?
Well, this is a big one. Okay, this is a... You know, by the way, there's a book out there by Stephen Wolfram. Some of you will remember his name from WolframAlpha, Mathematica, Wolfram Research-
Mm-hmm.
from the University of Illinois at Urbana. The guy's the real deal, and he wrote a book, about 90 pages long, called What Is ChatGPT Doing ... and Why Does It Work? It's not really about ChatGPT, it's about large language models. It's 90 pages long. It takes an hour and 15 minutes to read. I'll give you a money-back guarantee on this book. I mean, it is a really good one in terms of what these things do, because they actually do very little. Okay, but they do it. And the other part of it you'll find fascinating is that nobody has any idea how they work. Now, that being said, they really do some meaningful things.
So as it relates to B2B, where I have spent my career in B2B enterprise computing, I think what these large language models do is fundamentally change the nature of the human-computer interface in these large and complex applications. So when we deploy these applications, say, stochastic optimization of the supply chain for TRANSCOM, that would be contested logistics for TRANSCOM, or AI-based predictive maintenance at the scale of the United States Air Force, which would be one of the largest enterprise AI applications in deployment on Earth. Now, these applications, you know, these user interfaces look highly technical, like your Bloomberg Terminal, or like an SAP application, or like a Salesforce application, or like a Siebel application. They're accessible to the expert in that domain.
Now, when we deploy generative AI on top of these complex applications, the user interface basically looks like the Mosaic browser. Everybody knows what the Mosaic browser is. It spun out of the University of Illinois in 1993, and you know it as the Google browser because Google copied it. But instead of these highly technical user interfaces, and I think Bloomberg is a perfect example. I mean, does anybody in the room know how to use more than 10% of that functionality? I doubt it. I've never met anybody who claimed to be able to use more than 10% of the functionality there, which is really rich. But now the user interface simply becomes the command line from the Mosaic browser. Ask the question, get the answer.
Okay, what's the long position at Morgan Stanley today? Okay, what's the correlation between Morgan Stanley prices and, you know, unemployment rates in Nashville? And you get the answer right away, rather than having to hit all the command lines. Now, that being said, it changes the change management associated with AI. This idea that everybody in the future has to become a data scientist in order to do their jobs is a bunch of bunk, okay? Those who use AI well, I mean, you won't even know that you're using AI. By the way, take Microsoft. What these guys did with Copilot, I mean, this was an act of genius, right?
Well, you're using their IDE, VS Code, to program, and as you program, the thing is checking your syntax for you, or it's writing a subroutine for you. You don't even know that AI is happening. You know, as it relates to what we're doing in places like the United States Air Force, somebody might ask a question like, you know, what are my aerial weapon systems in the United States Air Force? What are my readiness levels for the F-13 squadron in Central Europe today? Relevant question. Okay, what is my cost of operating the B-1B program for each of the last five years? What are my biggest part shortages in the F-35 program? Well, it just gives you the answer.
Now, that general or that private doesn't know anything about language models, doesn't know anything about deep learning, doesn't know anything about machine learning models, but he or she gets the answer. This idea that we're gonna have to retrain programmers in New York City to be, you know, data scientists is a bunch of bunk. I mean, what's the user interface gonna look like for the taxi driver in New York City? This is it. You tell him or her where to go. What's the fastest route? Where's the place to hang out to get your next big ride? Go to Columbus Circle. That's where the money is. Okay, and so we'll be using the applications that we know how to use.
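The "ask the question, get the answer" pattern described here can be sketched in a few lines. This is a toy illustration, not C3 AI's implementation: the fleet records, field names, and keyword routing below are all hypothetical, and a real system would use a language model rather than keyword matching.

```python
# A toy sketch of routing plain-language questions to structured
# operational data. The fleet data and keyword routing are hypothetical.

# Hypothetical readiness records, keyed by airframe program.
FLEET = {
    "F-35": {"readiness_pct": 71, "open_part_shortages": 412},
    "B-1B": {"readiness_pct": 58, "open_part_shortages": 133},
}

def answer(question: str) -> str:
    """Route a natural-language question to the matching record and field."""
    q = question.lower()
    for program, record in FLEET.items():
        if program.lower() in q:
            if "readiness" in q:
                return f"{program} readiness: {record['readiness_pct']}%"
            if "shortage" in q:
                return f"{program} open part shortages: {record['open_part_shortages']}"
    return "No matching program or metric found."

print(answer("What are my readiness levels for the F-35 fleet?"))
print(answer("What are my biggest part shortages in the B-1B program?"))
```

The point of the pattern is that the user never sees the underlying schema; the interface is just the question.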
In the future, when you use Bloomberg, it's not gonna look like it looks today. I mean, come on! You'll just have a command line: you'll ask the question, you'll get the answer. Now, there are other applications that are really pretty fascinating, that are a little bit different from that. For example, we have a law firm, a large law firm that you all know, okay? And we're training the models with the corpus of S-1s on sec.gov. That would be all the S-1s that have ever been filed, right? So then we're gonna take, I don't care what company that's going public, say, Stripe-
Mm-hmm.
We type in the name, the address, the financials, the key risk factors, hit the carriage return, and it generates the first draft of the S-1 in, like, 15 minutes. Hey, that used to take, you know, 10 associates three weeks. Okay, now, have we replaced the jobs of the associates? No way, no how. We're just putting them on higher-value applications, higher-value use cases. We have a steel company that's using... Oh, Baker Hughes. Baker Hughes is using generative AI on top of ServiceNow and on top of Workday, where we've loaded the corpus of data in ServiceNow and Workday. This has to do with their HR systems.
I think they have 58,000 employees around the world, but I could be off by a factor of two. So, you know, whatever I said, just look it up and find out what the truth is. But I think it's roughly right. And these people are in Pakistan, they're in India, they're in Saudi, they're in Qatar, they're in Houston. And in all of those places, they have different HR policies, different insurance policies, different compensation plans, different vacation days. In some places it's Hanukkah, and in another place it's Ramadan. Okay, so now with this, they have a Mosaic-style user interface where any employee can ask any question about any HR issue, any place in the world at Baker Hughes, and get the answer. You know, what are the payroll dates?
What are the vacation dates? What are my benefits? You know, what doctors are in plan? I mean, I don't know how many countries Baker Hughes operates in, but it's a lot. And so we're seeing, you know, very creative uses of this. Riverside County. Riverside County-
Mm-hmm.
In Riverside, California, down by L.A. Okay, no, actually, I think it's the City of Riverside, I'm sorry. They're loading all of their information related to zoning: zoning ordinances, statutes, taxes. So a citizen can ask any question, like, "Where can I put my fence? What are the setbacks from the neighbor? What are the issues?" And it gives you the answer right away. So pretty creative uses of enterprise, excuse me, generative AI. Generative AI, people, is a genuine big deal, so hold on to your socks.
Let's create the nexus between generative AI and C3. Quite famously, I think you've quoted this on calls in the past, 60% to 70% of AI initiatives never get to production, right? And now we have a plethora of foundation models. What's C3's role in helping customers actually leverage generative AI and get it into production? And why would customers use C3 AI versus others?
Oh, I think closer to 95%-
Mm-hmm
- of enterprise AI efforts never make it to production. They all die. I mean, AI is where large, complex IT projects go to die with the systems integrator partner of your choice, okay? A few hundred million dollars and a few years later, they die. Now, let's look at these large language models, because there are very, very real problems associated with them, okay? Number one, they tend to be unimodal. Unimodal means we can use any data we want as long as it's text. Okay? Now, Andrew Ng, a name that many of you will know, will talk about multimodal, and his definition of multimodal is text and images. Well, that's not gonna cut it. We need to be omnimodal.
You know, so if you're gonna do a Dow, a United States Air Force, a Bank of America, you need all of their data: structured data, unstructured data, telemetry, voice, images, what have you. And so it needs to be omnimodal. Secondly, when you deal with these large language models, those of you who've dealt with Bard or ChatGPT or whatever it might be, Gemini, oh my God, the answers are stochastic, so every time you ask a question, you're gonna get a different answer. And if two of us ask the same question, we get different answers. Third, there's no way to enforce data access controls using these large language models.
You know, when we deal with the CIA, NRO, the United States Air Force, Bank of America, Morgan Stanley, okay, these people are kind of sticky about their data access controls and who has access to what. And the answers aren't traceable: you ask the question, you get the answer, and you have no idea where it came from, which is a real problem. We have this problem, famously documented at Samsung, called data exfiltration, where all of a sudden our intellectual property ends up on the public internet. And in a lot of organizations, like any customer you know or any of your organizations, that's problematic. It's also been well documented what attack vectors these LLMs are creating for cyberattacks.
They're just wide open, okay, to be able to get at personally identifiable and confidential information and intellectual property. We have this problem that's just starting to be documented now with IP liability. See The New York Times and what have you, where, you know, everybody's going to get sued over these large language models, because they tend to be trained on the internet, and you or somebody else owns that intellectual property. And the hallucination thing is absolutely wonderful. I mean, you can ask questions, and it's gonna make up history, it'll make up kings, it'll make up wars. I mean, it's really pretty cool. When it doesn't know the answer, it makes up a lot of BS.
Okay, and finally, all of these approaches, be it from Anthropic, be it from OpenAI, okay, Mistral, okay, they tend to be LLM-specific. And in, you know, March of 2024, who's going to bet on any one LLM when these guys are spending billions of dollars out-innovating each other every day? So for any one of these reasons, okay, this product does not get installed, and this dog does not hunt, okay? Doesn't get installed at the Army, doesn't get installed at the Air Force, doesn't get installed at Bank of America, and it doesn't get installed at Unilever. No way, no how. So we need to solve those problems. This is where C3 comes in.
C3 AI spent 15 years building a platform architecture called the C3 AI Platform for designing, developing, and provisioning massive-scale predictive analytics enterprise AI applications. Guys, I've been in the enterprise AI business, excuse me, the enterprise software business, for, I think, four decades. I mean, we have data access controls nailed. Identity, nailed. N-factor authentication, nailed. Omnimodal data, I mean, at places like the U.S. Air Force, we ingest structured data, unstructured data, telemetry, images, you name it. There's nothing that we can't ingest. And so we've addressed every one of...
Every one of those hobgoblins needs to be addressed, and by combining the LLM, okay, with the C3 AI Platform, we're installed today at places where nobody else is installed: CIA, NRO, ODNI, United States Air Force, U.S. Marine Corps, Boston Scientific, Con Ed, Riverside County, you name it. So that's what's driving this. By the way, we'll use any large language model the customer wants to use. If they wanna change it next week, they just change it next week. So we've solved that problem, and, you know, I don't know how many places we're installed today, but it's a large number, and it's growing very rapidly. We've solved those hobgoblins that prevent the installation. I mean, in many of your organizations, large language models are banned. I'm certain of it, okay?
They're banned for the reasons that I just told you, and we have solved those problems.
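Two of the hobgoblins named here, data access control and traceability, are typically enforced outside the model: retrieve only the documents a user is cleared to see, and return source identifiers with every answer. Below is a minimal sketch assuming a hypothetical document store and clearance scheme; the generation step is stubbed, and none of this is C3 AI's actual architecture.

```python
# Minimal sketch: enforce access control and traceability outside the LLM.
# Retrieve only documents the user is cleared for, and return source IDs
# alongside the answer. All documents and clearance levels are hypothetical.

from dataclasses import dataclass

@dataclass
class Doc:
    doc_id: str
    clearance: int   # minimum clearance level required to read
    text: str

STORE = [
    Doc("hr-policy-qatar", 1, "Vacation accrual in Qatar is 22 days/year."),
    Doc("m-and-a-pipeline", 3, "Confidential acquisition targets for Q3."),
]

def retrieve(query: str, user_clearance: int) -> list[Doc]:
    """Return only documents the user may see and that match the query."""
    terms = query.lower().split()
    return [
        d for d in STORE
        if d.clearance <= user_clearance
        and any(t in d.text.lower() for t in terms)
    ]

def answer(query: str, user_clearance: int) -> dict:
    docs = retrieve(query, user_clearance)
    if not docs:
        return {"answer": "No accessible source found.", "sources": []}
    # Stub for the generation step: a real system would pass docs to an LLM.
    return {"answer": docs[0].text, "sources": [d.doc_id for d in docs]}

# A clearance-1 analyst sees HR policy but not the M&A pipeline.
print(answer("vacation days in Qatar", user_clearance=1))
print(answer("acquisition targets", user_clearance=1))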
Early last year, you guys launched the C3 Generative AI Suite. What's been the reception and adoption of the C3 Generative AI Suite? And, I guess the multi-billion-dollar question: how big of a revenue driver can this be? How big a proportion of C3's revenue can it be over time?
Well, I think the predictions are that generative AI is like a $1.3 trillion market by 2032. That's from Bloomberg, Goldman Sachs, and others. I mean, when we started talking about AI, nobody else in the world was talking about it. I just happened to do a search this morning: what's the combined valuation of the public AI companies? I think it's between $7 trillion and $9 trillion, so something's changed. So I think this AI thing is very real. I believe that, you know, the game that we're playing at C3 AI, both generative and predictive... I mean, generative accelerates what we're doing and presents an entirely new market.
We're going to market in generative AI in a big way with AWS, Google, and Microsoft. They make great marketing partners for us. The game that we're playing at C3 AI is to see if we can establish and maintain a market leadership position in enterprise AI applications, okay? Globally. Now, I realize that sounds like kind of an aggressive statement, but, you know, we also said that in 1983 and 1984 at Oracle, when Oracle had, you know, 12,000 square feet at 2710 Sand Hill Road and about 40 employees, okay? And we were gonna establish and maintain a market leadership position in enterprise, excuse me, in relational database systems. Well, how'd that work out? Pretty well, right?
Mm-hmm.
And who were we competing against? I mean, a couple of these competitors were legit. Like, you know, IBM was perceived as legit then. I don't know if they still are, but they were then. In 1993, we spun out of Oracle, and we said, "We're gonna establish and maintain a market leadership position," okay, "in CRM." What was CRM? Nobody knew, and everybody was looking at the world from their spreadsheet of, you know, column OA, row 1,153. There was no CRM market. Okay, well, we established and maintained a market leadership position in CRM. I think we grew the company from zero to $2 billion in revenue in six years. I think that was the fastest ever. With Siebel Systems, we had 85% market share in CRM.
We did, in fact, establish a market leadership position globally in CRM. When Larry bought the company, I think Siebel Systems had on the order of $2 billion in revenue. You can look up the facts. There was a little start-up there, a company called Salesforce, that had revenue on the order of $200 million. So they were just kinda ankle biters back then, but they did pretty good. So that's the game that we're playing. We see a large market opportunity here. We're out investing in the market. We're a well-capitalized organization, with a great product, great technology, great customer relationships. We're going to see if we can establish and maintain a market leadership position in enterprise AI. Hey, you know, we might fail. What happens if we fail? We're number two or three. Okay?
Is number two or three worth $3 billion? I don't think so. So that's the game that we're playing, and I think there's some chance we might pull it off. By the way, this week I will spend three days with five or six hundred of my best friends in Boca Raton at the C3 AI user group conference called C3 Transform. It is webcast for all of you to see for yourselves. So, who will be there? Holcim will be there, a construction products company out of Switzerland. Dow will be there, Baker Hughes will be there, the United States Air Force will be there. All of our customers will be there. The C3 people will talk about the product roadmap going forward.
All of the customers will talk about their experiences using the product. I mean, go to the website, log in, join whichever event you're interested in, whether it's defense, whether it's intelligence, whether it's manufacturing, and listen to the companies talk about the economic benefits they're getting from these applications. Then you decide if it's real, okay? There are two ways you can look at the market. You can look at it from the perspective of the person who's seeing this from column OA, row, you know, 11,521 of the spreadsheet, or you can research it yourself. And so we're making all the customers and all the C3 executives immediately available to you this Wednesday and Thursday. So I encourage you to join us.
Stay tuned for C3 Transform this week. It should be good. You started the conversation talking about 40 different production-grade applications across various industries. And I wanted to ask you: what's the plan for those applications? Are they themselves going to be infused with generative AI, or are you gonna integrate them with the generative AI suite? And maybe more broadly, what's the journey that you expect a customer to follow? Do they land with C3 Generative AI and then start adopting some of these, you know, out-of-the-box applications? How do you see that customer journey unfolding going forward?
When we changed... It's a great question. When we changed our pricing model, I think about eight quarters ago, we became much easier to do business with.
Mm-hmm.
So the opening gambit to do business with us was, you sign a license agreement for $10 million to $50 million. It was good work if you could get it, and we were a private company, and, you know, it saved us having to ring doorbells on Sand Hill Road to, like, finance the business. But, you know, when you're doing deals at that scale, you're doing maybe five or 10 transactions in a quarter. Now we're doing 50 in a quarter, and soon it'll be 100. So what is our approach to the market?
Our approach is, we'll bring the application live at name-the-manufacturing-company, name-the-chemical-company, name-the-lumber-company: the stochastic optimization of the supply chain, supply chain network risk, demand forecasting, customer churn, process optimization, really, really difficult stuff. We'll bring the application live in six months for $500,000. If you like it, keep it. Six months for $500,000. I mean, do you know how much these cost to do with the professional services company of your choice? It's like three years and, you know, $50 million. And so we bring it live. And if it's generative AI, I think it's like 12 weeks and $250,000, and we bring it live...
And then, if you like it, keep it, and then pay, I don't know, $0.55 per CPU-hour or something like that. Okay, but what happens is, many of the people, I think the majority, actually, of the organizations that are using our enterprise applications decide they want to use generative AI, too. And many of the organizations that started with the generative AI application decide they want to use our enterprise applications also, so they definitely feed each other.
Mm-hmm.
There is definitely a standalone generative AI market, like I talked about with the law firm: generative AI for Workday, generative AI for Salesforce, generative AI for ServiceNow, generative AI for SAP, generative AI for HR, generative AI for manufacturing. I think we have 28 products in the marketplace. But it's very easy for somebody to start. We bring them live in three to six months. Our retention rate is very high, and they tend to then buy more stuff from us. So it's been a very successful model.
Mm-hmm.
Okay, we've seen it in the diversification-
Yeah
... of our
Bookings.
Of our bookings. Goodness, we used to be, like, 70% oil and gas. I think oil and gas was, like, 1% of our bookings last quarter, if I'm not mistaken.
Yes, that's right.
The largest segments have come out of nowhere. State and local is huge. Okay, federal is huge, manufacturing, what have you.
The last piece I wanted to touch on is the role of the C3 AI Platform. So we've had generative AI, we have the applications. What's the future of the platform? 'Cause I think it's, like, the smaller piece of revenue. How do you see that? What role is it gonna play in C3 AI's growth?
The platform, so we started and built it... We spent, you know, over a decade building the platform that allows us to design, develop, provision, and operate these massive-scale enterprise AI applications in the segments that I've talked about. Now, the beauty of the platform is that it enables us to rapidly build these applications for the value chains of health, defense, intel, utilities, oil and gas, manufacturing, consumer packaged goods. Now, what's counterintuitive is, whether we're doing customer churn at Bank of America, or prediction of hypersonics, this would be something coming in at, say, between five and nine times the speed of sound, okay, for the Missile Defense Agency, where we would predict, like, are they gonna land on the Pentagon or are they gonna land in the Atlantic Ocean?
When they come at nine times the speed of sound, you don't have a lot of time to make that prediction. Or whether we're doing, you know, demand forecasting for somebody making asphalt, or predictive maintenance for somebody running pharmaceutical manufacturing: 75% of the code in all those applications is absolutely identical. That's the beauty of the platform. I mean, really, the code is identical across all those use cases. The predictive maintenance application that I have for the Air Force is the same code that I'm using for, you know, olefin production optimization or asphalt optimization at Flint Hills Resources. And so it is a really key asset.
The idea that all the source code is identical across all of those use cases, guys, this is a big deal, and this includes generative AI. So this is definitely a platform play and a big one.
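The "75% identical code" claim is the classic platform pattern: one generic core, with only a thin per-domain configuration differing. Here is a minimal sketch under that assumption; the domains, thresholds, and z-score method are hypothetical illustrations, not C3 AI's actual applications.

```python
# Sketch of the platform pattern: one generic anomaly-flagging core reused
# across domains, with only a thin config differing per use case.
# Domains and thresholds are hypothetical.

from statistics import mean, stdev

def flag_anomalies(readings: list[float], z_threshold: float) -> list[int]:
    """Generic core: flag indices whose z-score exceeds the domain threshold."""
    mu, sigma = mean(readings), stdev(readings)
    return [i for i, x in enumerate(readings)
            if sigma > 0 and abs(x - mu) / sigma > z_threshold]

# Thin per-domain configuration -- the ~25% that differs.
DOMAIN_CONFIG = {
    "aircraft_predictive_maintenance": {"z_threshold": 1.5},
    "asphalt_demand_forecasting": {"z_threshold": 2.0},
}

def run(domain: str, readings: list[float]) -> list[int]:
    return flag_anomalies(readings, **DOMAIN_CONFIG[domain])

vibration = [0.9, 1.0, 1.1, 1.0, 5.0]   # engine sensor series, one spike
print(run("aircraft_predictive_maintenance", vibration))
```

The design choice is that the core never knows which industry it is serving; adding a new vertical means adding a config entry, not rewriting the pipeline.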
We have about 10 minutes left. I wanna hit on the federal opportunity, and then after that I definitely wanted to take questions from the audience. So after we hit federal, just raise your hand, and then the microphone will come over, and you can ask Tom a question. So on the federal opportunity: I think a buzzword the last couple of years has been defense tech, and there's been a lot of excitement in federal. Can you frame out, from a macro view, what's going on with federal investment in, you know, defense tech and software more broadly, and then give us an update on C3's progress over the last year or two?
Well, I think the best book on defense tech is The Kill Chain by Christian Brose, and he talks about what the next generation of warfare looks like. So rather than moving multi-billion-dollar things to the other side of the planet at, like, 30 knots, we're moving, you know, billions of one-dollar things to the other side of the planet, the hypersonics, and they're being coordinated in swarms. So that would be defense tech generally. Now, we play in a relatively small but very, very important aspect of the kill chain, which is the use of AI to basically optimize all these systems.
I mean, it's generally acknowledged that whether we're dealing with hypersonic swarms, subsurface autonomous vehicles, multi-domain command and control, contested logistics operations, or intel space, we need these complex predictive analytics applications to optimize the operation and coordination of these systems. So there's definitely a war going on in AI between the United States and China. It's been going on for probably about a decade, and now it's accelerating. It's clearly recognized in the Department of Defense, okay, and in the White House, that AI is a strategic imperative for the defense and intelligence community. For much of the last 14 years, they've been doing the same thing they're doing in the private sector, which is trying to build these applications themselves.
At one point, they had 600 independent efforts to try to build a platform like C3 AI's. Exactly how many of those efforts do you think succeeded? Well, the same number that have succeeded at Shell, ExxonMobil, and Chevron: that would be zero. So now they've decided to adopt, you know, commercial, off-the-shelf, tried, tested, proven production technology like C3 to operate these systems, and this bodes pretty well for us. So we're involved in contested logistics and supply chain at DLA. We do a lot of work for the Missile Defense Agency, and NRO, which is associated with the operation of the satellite systems. Predictive maintenance, I've talked about that.
So we're very much involved in helping them invent the warfighter of the future and kinda survive, you know, the next engagement, which hopefully will be later rather than sooner. But, you know, there's a lot of investment going on in that area, and to the extent that we have the opportunity to serve, we feel privileged to do so, and those opportunities seem to be growing quite rapidly. I think our federal business was up something like 100% year-over-year. Do I have that right?
I think the previous quarter was up well over 100%. Last quarter, I think it was up 87%, so-
100% quarter-over-quarter?
100% year-over-year, I think-
Oh, okay
... in, in, in-
Thank you.
Two quarters ago, and 87% last quarter, so-
You keep me out of trouble. Thank you.
Yeah. No, either way, hypergrowth, right? And maybe just a quick comment there. So you're seeing exceptional bookings coming out of federal. Is that more dollars coming into federal, or is there, like, a share shift from some of the incumbent software providers to C3 AI? Like, what's driving that exceptional-
Well, aside from all the rancor going on with the budgets that we hear and read about every day, you know, I don't think there are any limits on-
Mm
... what they can spend in the Department of Defense and the Intelligence Community. So those budgets are kind of wide open. And, you know, they are definitely investing in reinventing everything they do with the management of the kill chain, and we're involved in actually quite a bit of that.
Awesome. If there's any questions for Tom, please raise your hand and wait for the microphone. Got one up front, and then we'll go to the back.
Tell me who you are, it'd be great.
Sure. Andrew Johnston from LMR Partners.
Hi.
I guess two questions. One, why do you think you've had more success with, like, state and local businesses versus, you know, mid- to large-sized enterprises, where I'd think budget is more readily available for AI than it would be for state and local? And the second question is: Why aren't you growing significantly faster than you are, given this kind of opportunity? Are there any sort of offsets we should be thinking about to your growth over the midterm? I mean, obviously, this opportunity is pretty large, as you've articulated. I just want to understand why the growth is not dramatically higher.
Well, I think there is opportunity for growth. We made a decision, eyes open, to depress the growth rates. I mean, when we switched from doing business $10 million, $20 million, $30 million, $40 million, $50 million at a pop to $250,000 at a pop, you know, revenue declined, okay? In the short run, revenue declines. Now, in the long run, it accelerates. I think, you know, I think there's some probability when we get to the other side of this curve, where we're gonna see dramatic revenue acceleration. Now, we see this growth in GPUs at, at NVIDIA of what, 400% year-over-year? Okay, growth in revenue of what, 260, 260%? You guys know these numbers. I don't, okay? But I think I'm directionally right.
I mean, look at what's going on at AWS, at Google Cloud, what's going on at Google. I mean, guys, this infrastructure is not being laid so people could, like, do social media. I mean, yeah, it is, okay? You know, are they gonna all do, you know, deep fakes and kind of weird pornography using this stuff? Yeah, they will, okay? But is that really what this is about? It is not. I mean, these guys are laying the framework to run business-to-business enterprise applications, and I think that's a pretty good leading indicator for our business. You know, I'm not entirely unfamiliar with rapidly growing companies. I have grown the fastest-growing enterprise application software company in history.
So, you know, maybe we have some plans, and we're doing this methodically. Okay, we're leaving a line of satisfied customers in our wake. You can listen to them this week if you have time. And, you know, stay tuned.
Can we go to the back?
Hey, Tom. Tomas Amil here with BlackRock.
With BlackRock?
Yes.
Hi.
How are you thinking about GenAI being used on the implementation side of your business, as an opportunity to accelerate deployment and, kind of, improve the ROI equation for your customers?
Where it improves the AI-
On the implementation-
Where it improves the ROI-
Yeah
... is in terms of the ability to make the technology broadly adopted throughout the enterprise. These sophisticated predictive maintenance applications, production optimization applications, HR applications, ER applications, only specialists can use them, okay? Well, now, when the user interface is like the Mosaic browser, everybody in the company knows how to use it. The kids know how to use it, and their mothers and fathers know how to use it. So it really simplifies the change management problem, which was going to be the constraint, okay? What's going to be the constraint on the AI market? Guys, it's not GPU capability, okay? It's change management.
I think the other constraint on the AI market is going to be availability of power. It's not going to be availability of GPUs; it's the failure of the power infrastructure to be able to, you know, power these data centers.
Any other questions for Tom? Up in the front, please. This will probably be our last one.
Vishal Patel, Scotia Global Asset Management. Great presentation. You're an optimist. You say you believe in the AI opportunity. Is there anything that you don't believe in or that you think is overhyped or, you know, what don't you believe in longer term?
Was the internet overhyped in, you know, in 2000? Was it? And we had a market correction, and some of you were there for the dot-com crash. How big is the internet today compared to what it was in December of 2000? I mean, three orders of magnitude larger. Was the internet overhyped in 2000? No freaking way, okay? Okay, if we look at enterprise AI, is there going to be a market correction in NVIDIA stock? Who cares, okay? I mean, maybe somebody does if you own it, but I don't care. You know, is there a market correction in AI? Who cares, okay? I'm not here to manage this business for next quarter's stock price. Not my job, okay?
You guys think about this, I don't think about this, okay? I'm here to build a great company. I'm here to build satisfied customers. I'm here to gain market share. And as far as I'm concerned, you can turn the market off for five years, okay? I don't need the market, okay? And that isn't to say that I don't respect what you do, because I do respect what you do. But, you know, you turn the lights back on in five years, everything's gonna be fine, okay? And that's the way we look at markets.
Thank you so much, Tom, for coming to the conference.
Thank you, Singh.