We're live? Okay, showtime. Good afternoon, ladies and gentlemen. My name is Tom Siebel. Thank you so much for joining us for our June 2023 Investor Day. The slides gonna advance? Yes, I think they will. I brought out much of our executive team to spend time with you today. I'm joined by Ed Abbo, who founded the business with me and is President and Chief Technology Officer. Houman Behzadi, who serves as Executive Vice President. My colleague Nikhil Krishnan, who runs all data science, is Chief Technology Officer of Products and owns the C3 Generative AI product that he will be talking about. Juho Parkkinen, our Chief Financial Officer. Amit Berry, who runs IR. Amy is here, our Chief of Staff.
Alex Amato, who runs Customer Services and is responsible for all the customer deployments going on around the world. Merel Witteveen, who runs our Alliances program. Derron Blakely, our General Counsel. Binu Mathew, who runs our Products and Engineering organization. Paul Phillips, who assists with IR, and Henry Murray, who runs Government Relations in D.C. You will have the opportunity to interact with and ask questions of much of the leadership of C3 AI. I'm also very pleased that we'll be joined today by my good friend Honorio Padrón, who is our customer at Pantaleon, which is a large sugarcane producer in Central America. Graham Evans, who has his high school photo there. Yearbook photo.
Graham Evans, with Booz Allen, is a very strategic partner of ours as it relates to the defense and intelligence agencies, and Graham will share with you what they're doing. Later this afternoon, when we get to demos and cocktails, Chris Casey is joining us from AWS. What we want to cover today, and you're gonna be drinking out of a fire hose: I'll give you a very brief company update, and then we'll get to substance. Ed Abbo will come up and give you a briefing on what is happening in sales.
Honorio Padrón is gonna come up and talk a bit about what it is like to be engaged in a trial, what it is like to go into production, and what the decision process is like. Merel Witteveen, who is our Group Vice President of Alliances, is gonna talk about our alliance program. Graham Evans will come up on stage and talk about the nature of the very significant partnership between Booz Allen and C3 AI. We're gonna take a break, and during the break, we'll have demos going on of the C3 Generative AI product, the C3 AI ESG product, and the C3 AI Readiness product. There'll be four products there, so the product managers will be there.
You can see hands-on what this stuff does and how it works. What might be the highlight of the day is Nikhil Krishnan, who will come and talk about what we're doing with C3 Generative AI. I think you will find that really fascinating. Binu Mathew is gonna talk about our product footprint and product roadmap, and then we'll bring the entire management team up to answer your questions. This is all being webcast live, which makes us all, you know, Reg FD compliant, so we can, you know, speak quite freely with you.
I'm very pleased that we are joined also by an old friend of many of you, and certainly an old friend of mine, Rick Sherlund, who is something of a legend in the information technology community, most recently as a Vice Chairman at Bank of America. Yeah, Rick really, I think, made his name as kind of a legendary, you know, sell-side analyst back in the days of Microsoft and Oracle, when those companies were just getting started. Rick, if you'll join me, that'll be great. Rick is going to join me, and we are gonna carry on a kind of a brief conversation about what's happening at C3 AI, and then we'll just get on with this event. Thank you again for joining us. Rick, thanks.
Thanks, Tom. Long since gone are the days when I was writing research, but I read all the research, and as you'd expect, everyone's kind of focused on the transition from the subscription model to consumption; that's pretty much what people are occupied with for the next year or two in their modeling. If we look at a little bigger picture, and perhaps a little longer term: you've had a long history of being very visionary. Going back to the Siebel days, you invented the whole CRM category. You wrote the book on digital transformation, and you've been talking about AI for a long time, and you've been very bullish about just how big it's gonna be. Now, I think it's very speculative to say, well, how big is big?
Because it's gonna be, like, really big. Maybe we could start just with your perspective about what's going on out there. What's your vision of what all this means?
We started this effort after we sold Siebel Systems to my friend Larry Ellison. I recall that deal closed in January of 2006. We started thinking about what was next, and we got together with a number of colleagues in the industry, from Oracle, Intel, SAP, McKinsey & Company, Accenture, you name it. We ideated for the better part of a couple of years, thinking about what was next in information technology. We thought next was about elastic cloud computing, big data, the Internet of Things, and predictive analytics. Look, guys, let's think back to 2006. AWS was selling about this much in terms of cloud. That's exactly how big the market was.
We believed this was going to be a big market, and so we came up with this idea that we would build a software platform that would allow organizations to take advantage of that step function in technology consisting of the items I've identified. We sent out an email on a Friday, raised $20 million by Sunday, and the company was financed, and we began work in January of 2009. Now, we have been at this very vocally for 14 years, and particularly the last decade, believing that this market for predictive analytics applications applied to the enterprise was going to be a large addressable market. Not everybody shared that perspective, and I understand that.
Now, fast-forward: we wrote books about it, and we gave keynote speeches about it. We've sold a little bit of stuff into the space in the last 14 years; I think we've booked maybe $2 billion in software, rough numbers. We were able to make some things happen. Now here we are. It's June 22, 2023, and I don't think it's an overstatement, I really don't think it's an overstatement: on June 22, 2023, there is not one CEO in the world who didn't think about AI today. There's not one government leader in the world who didn't think about and have a meeting about AI today, and there's not one investor who didn't think about AI today.
There are lots of people who didn't think about China today, lots of people who didn't think about quantum computing, lots of people who didn't think about Ukraine or climate change, but nobody didn't think about AI. We've never seen anything like this, and we've been involved in some large addressable markets: relational databases, enterprise application software, CRM, cloud computing, et cetera. Nobody's ever seen anything like this. However big we thought it was, it's one to two orders of magnitude bigger than that. What's going on in the marketplace today is really quite remarkable.
Tom, machine learning has kind of been your main AI business in the past, but now generative AI and large language models create the opportunity for a new user interface to the product, how you interact with the system, which means you can broaden the number of users. On these large language models, I read an article recently where someone said: well, just because you can memorize the Internet doesn't make you smart or intelligent. Because of your unique architecture, you've got the logic layer, which is all the AI applications underneath, with the machine learning. When you ask a question, you've got this layer that goes right into, I presume, the vector database, and it goes to the AI to answer the questions.
It's not, in your case, just about a natural language interaction. It actually is very substantive because you have the intelligence to back up the questions that are being answered. If you can kind of talk about, you know, what the natural language means to the company in terms of the UI, but also, does this reduce the friction for adoption in the market as well as people start thinking that, "Oh, we really need AI," even though they don't necessarily understand what that means? You go much deeper than the natural language. You get right into the substance of the AI applications.
If I may, I think I need a clicker 'cause I wanna show a slide, I left it up there.
No, I don't have.
Go backwards? Then go. Okay. This is a billion and a half dollars' worth of software engineering. This is the C3 AI Platform, and this is a model-driven architecture. What is different about C3? This is what's different about C3, okay? You may or may not agree when I'm done, okay? This is what's unique about C3: we spent a billion and a half dollars.
Hello?
We can't hear you on the live stream.
Oh, you can't hear me? One, two, three. Are we live?
Yep.
Okay, good. This represents, I'd have to count, but someplace between 1,000 and 2,000 person-years of work. This software platform, the C3 AI Platform, gives you in one cohesive architecture all the software services necessary and sufficient to design, develop, provision, and operate massive-scale enterprise AI applications. We do this in the Department of Defense. We do it in the oil and gas industry. We do it in utilities. On the left side of this, you see all the data fusion capabilities, where we take ERP systems, CRM systems, SCADA systems, weather, terrain, social media, and open-source data about unemployment, stock prices, commodity prices, GDP growth rates.
We aggregate those data in a unified, federated image, okay? Then we have a family here: you see all the data persistence technologies, relational, non-relational, key-value store, vector databases, what have you. Down below, we have all the platform services that are necessary for applications of this nature, like encryption in motion, encryption at rest, access control, ETL, what have you. Then machine learning services: supervised learning, unsupervised learning, deep learning, reinforcement learning, natural language processing. Then we have, on the other side (this is a 3D image of what you see up there), all the data visualization tools.
We've enveloped this in a family of application development tools, deep-code, low-code, and no-code application development tools, and we've used that to build 42 turnkey applications for the defense, intelligence, banking, telecommunications, healthcare, manufacturing, utilities, and oil and gas industries, et cetera. As we enter June of 2023 and everybody is interested in how they can use these technologies: oh, by the way, when we build an application in this environment, any application in this environment, that application will run without modification. Look at the bottom row of this guy, okay? It will run without modification on the AWS cloud, the Google cloud, the Azure cloud, behind the firewall, and on the edge. Hey, this is a pretty neat trick! So now we have these 42 applications.
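The "runs without modification on any cloud" claim describes a classic provider-abstraction pattern: applications code against platform services, and each deployment target supplies its own implementation. A minimal sketch of that idea follows; every class and function name here is invented for illustration and is not C3's actual API:

```python
from abc import ABC, abstractmethod

class BlobStore(ABC):
    """Abstract platform service the application codes against.
    Each deployment target (AWS, GCP, Azure, edge) would supply its
    own implementation; the application itself never changes."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(BlobStore):
    """Stand-in implementation, e.g. for an edge or behind-the-firewall
    deployment with no cloud object store available."""

    def __init__(self) -> None:
        self._data: dict = {}

    def put(self, key: str, data: bytes) -> None:
        self._data[key] = data

    def get(self, key: str) -> bytes:
        return self._data[key]

def sensor_app(store: BlobStore) -> bytes:
    """Application logic written once, against the abstract service."""
    store.put("latest-reading", b"42")
    return store.get("latest-reading")
```

Moving from one cloud to another, or from cloud to edge, then means swapping the `BlobStore` implementation rather than rewriting `sensor_app`.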
As we enter the second half of 2023, this enterprise AI market is exploding, and everybody is trying to figure out how to use these technologies to be competitive in their businesses, or they will go out of business if they do not use them. There is exactly one company in the world that has 42 turnkey AI applications. One company. And there's exactly one company in the world, okay, that has built a platform that enables these applications. Let's talk about generative AI. Generative AI did kind of change everything. Nikhil and his group, the data science guys at C3, have been working with generative AI since maybe 2020 on a number of interesting applications.
We started to work on this in earnest in September of 2022, associated with a request we got from a customer of Graham's and mine in the Department of Defense, who said he wanted Google for DoD. This is the person responsible for all AI platform standards in all of the DoD. He said, "Tom, we need to be the Google for DoD. A customer asks a question and gets an answer." I got in a room with my colleagues Ed, Nikhil, and Henrik, our lead data scientist (I don't think Henrik's here today), and we started sketching out solutions. What does Google for DoD mean?
With generative AI, I know it looks like a step function to the rest of the world. For us, it just looks like the next step. This AI has developed from IoT to unsupervised learning, to supervised learning, and someplace in there you have NLP and reinforcement learning. Now you have generative AI. What happens next? I don't know. When this thing came out in November of last year, from November through to today, it has certainly fueled the market. Everybody's scared. Everybody wants to know what this means. To us, basically, it was just another tool that became available; our platform allows us to just put it in our machine learning layer. Now we have generative AI.
Now, in terms of what large language model we put in that model: I mean, it could be ChatGPT, it could be Copilot, it could be FLAN-T5 from Google. It could be, what is the other real popular one?
Bard?
No, not Bard. What's the other one?
Uh-
PaLM.
PaLM.
PaLM. We don't care what goes in there; we can take advantage of any one of them. What is the effect of this in terms of what we're doing with LLMs? I can tell you, these are great companies, okay? We're not inventing the LLM technology. There are billions of dollars being invested in this by Microsoft, by OpenAI, by Amazon, by academia, MIT, what have you. As these technologies become available, whether the next one is more powerful or one becomes available for the banking industry, we'll just take advantage of it. That's the way this architecture works. As these new technologies become available, we take advantage of them.
People want to know, "Gee, how do we compete with Databricks?" We don't compete with Databricks. Databricks is just a data virtualization technology. We have data virtualization; it's right here. Databricks came out of the University of California at Berkeley, out of a project at the AMPLab. It was developed by six graduate students in a year, six person-years of software engineering, and they reported to the then chairman of the department, Michael Franklin. Yes, the Thomas Siebel Chair of Computer Science. These guys took that project, which was called Spark, put it into the open-source community, and came out with a commercially supported version of Spark. They call it Databricks. Great idea, great product, hugely successful company.
Out of the box, our product happens to use the successor to Apache Spark, which is called Ray, which we think has a little bit more utility. Many of our customers, Shell, DoD, Booz Allen, others, want to use Databricks. No harm, no foul. Take out Ray, put in Databricks, okay, and you're running. These things that appear to be competitors are not competitors. Let's talk about LLMs; this will be the last one, and I will answer your question, Rick. Let's look at all of these companies. Great companies: Microsoft, Google, Amazon, Accenture, IBM, OpenAI. Not one of them today ships a production software product for enterprises to deploy generative AI at the enterprise level. Not one. Okay?
The way that all of their product architectures work, if you were to install them, is like this: you can integrate any data you want into the LLM, as long as it's text, HTML, or code. The LLM has direct access to those data. What happens then? Number one, the answers are stochastic. You've used ChatGPT, you've used Bard; you know they're stochastic. Every time you ask a question, you get a different answer. Secondly, there's no traceability. You can't see where the answer came from.
If you're the Chairman of the Joint Chiefs of Staff and you want to know where your satellite gaps are in INDOPACOM, and it gives you an answer, you want to know where that answer came from. They're not traceable. Stochastic also means that if two people ask the same question, they get different answers. Thirdly, none of our enterprise access controls, at Bank of America, at the Department of Justice, at DoD, are enforced, so anybody can get access to any data. We have a risk that you've read about, and it's very real, of LLM-caused data exfiltration. See Samsung for details. The LLM has direct access to the data, and it pumps all your IP out onto the internet. In a lot of organizations, in any commercial implementation, that's problematic.
Finally, when all of this slideware gets into production, it's prone to hallucination: if it doesn't know the answer, it makes it up. I can go on Bard and ask: who are the largest institutional shareholders of C3? It'll tell me the answer. Next, I can ask it a question that it can't possibly know the answer to: who has the largest short position in C3? It'll come up with 11 institutions and tell me what their short positions are. It's absolute poppycock. It's unknowable. You guys know it's unknowable. It'll just make it up with equal authority; it'll just kind of wing it.
Our solution is a little bit different. We're putting C3 Generative AI on top of a cool billion and a half dollars' worth of software engineering. We are the masters of the universe at data fusion. ERP, CRM, SCADA systems, sensor data, whatever it might be: we can aggregate those data into a unified, federated image. When we do so, okay, rather than bringing these data into the LLM, we're bringing them into a deep learning model, and that deep learning model is storing these data in a vector data store. Now, we have built a firewall, okay, between the LLM and the rest of this guy. The LLM interacts with us, okay, and this is where the data are stored, and the data are secure.
In a way, we kind of think about this as the brains and this as the memory, and we have disconnected the two. When the LLM figures out what we want to ask, it passes the question through the firewall to a retrieval model. The retrieval model gets the answer out of the vector data store, gives us only the answer, and presents it to us. What are the advantages? The responses are deterministic: every time we ask a question, we get the same answer, and if two people ask the same question, they get the same answer. Secondly, everything is traceable. Once it gives us the answer, we can click on it, and it tells us exactly where the answer came from. It takes us right to ground truth.
Thirdly, all of our access controls at Bank of America, CIA, Defense Counterintelligence and Security Agency, whatever it may be, they're fully enforced. Fourthly, there's no risk of LLM-caused data exfiltration because the LLM has no access to the data. We don't have to worry about IP disappearing into the internet. Finally, the darn thing doesn't hallucinate. If it doesn't know the answer, it just says, "Hey, you know, I don't have access to these data." It's fundamentally different.
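The architecture described here, an LLM that only interprets the question while a separate retrieval model answers it from an access-controlled vector store, can be sketched roughly as follows. Everything in this sketch (the toy term-overlap "similarity," the class names, the role model) is a simplified invention for illustration; it is not C3's implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Passage:
    text: str
    source: str               # provenance, so every answer is traceable
    allowed_roles: frozenset  # access controls travel with the data

class VectorStore:
    """Toy stand-in for the vector data store behind the firewall."""

    def __init__(self):
        self.passages = []

    def add(self, passage: Passage):
        self.passages.append(passage)

    def search(self, query_terms: set, role: str):
        # Access controls are enforced before retrieval: data the caller
        # is not entitled to is never even scored.
        visible = [p for p in self.passages if role in p.allowed_roles]
        # Deterministic toy "similarity": count overlapping terms, with a
        # fixed tie-break, so the same question always gets the same answer.
        scored = sorted(
            ((len(query_terms & set(p.text.lower().split())), p) for p in visible),
            key=lambda s: (-s[0], s[1].source),
        )
        return scored[0][1] if scored and scored[0][0] > 0 else None

def llm_parse_question(question: str) -> set:
    """Stand-in for the LLM side of the firewall: it only turns the
    question into a retrieval query. It never touches the data itself."""
    return set(question.lower().replace("?", " ").split())

def answer(question: str, role: str, store: VectorStore):
    query = llm_parse_question(question)   # the "brains"
    hit = store.search(query, role)        # the "memory," behind the firewall
    if hit is None:
        # No hallucination: if the store has nothing, say so.
        return "I don't have access to these data.", None
    return hit.text, hit.source            # answer plus its ground-truth source
```

In this toy, asking the same question twice returns identical results, every answer carries its source, and a caller without the right role gets a refusal rather than a leak, which mirrors the four advantages listed above.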
The most important thing about the C3 Generative AI product is that it's shipping into production today. We're installing it today at the Missile Defense Agency, at a large intelligence agency that I can't name, at Georgia-Pacific, at Flint Hills Resources, and at some other organizations that I'm not prepared to announce yet. In, I don't know, eight weeks, we'll have the first six users up in production before anybody else ships a product. It's a little bit different.
Yeah, I would argue it's unique in that it leverages the platform, which touches data all through the enterprise, whether it's on-prem, hybrid, or cloud. If you were to ask Oracle for an answer, it's gonna look at the Oracle database. It doesn't have the ability to look at all the data sources that...
It's a small language model. Exactly. We're not connected to the internet, and for these types of applications, nobody wants to be connected to the internet.
Right.
You basically have access to the information within the Air Force, or within Georgia-Pacific, or, with Charles Koch, within Koch Enterprises. So you're limited to the firewall, limited to what you have access to. I think our time is up, Rick, and we're on schedule, so I'm gonna turn this over to my colleague of now many decades, Chief Technology Officer and President of C3 AI, Ed Abbo. Ed, please.
Thank you, Tom. Okay, what I'm gonna do is give you an update from the field on how we're seeing the broad interest in AI translate into sales, and into sales organization activity over the quarter. Let me start by laying out the context of what our field teams are doing out in the market. They're basically engaging with prospects and customers and identifying a pilot. Think of that essentially as deploying our applications into production: it's a pilot, but it's production use of the application, in a limited fashion, at a customer.
That application might be using AI to improve demand forecasts, or it might be using AI to improve reliability and throughput of manufacturing. The first conversation that we have is: where should we really start? Many customers have an appetite for putting AI everywhere, but you have to start somewhere, and that's the pilot discussion we're engaging in. That discussion has accelerated; it's now roughly just under four months to get that pilot identified and an agreement concluded. After that, we have up to six months to put an application into production deployment, and sometimes it takes three months. If we're talking about generative AI, those are pretty fast; we can deploy those in roughly 12 weeks.
Other times it takes longer, because you're actually integrating many systems and putting something into production. At the end of that, we've actually demonstrated value to the customer, so they see the value of improving reliability in their plant, and then we convert that into production usage, and that's where the consumption that Tom talked about kicks in. That's $0.55 per virtual CPU hour, and as they expand the deployment, they get more value out of it, and we get more consumption revenue from it. That's the model that we're driving in the market. With this broad interest in AI, let's see what it's doing to the metrics that we look at on a daily and weekly basis.
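As a rough illustration of how that consumption model compounds with expansion: the $0.55 rate comes from the talk, but the vCPU counts and runtime hours below are invented assumptions, not disclosed figures.

```python
# Consumption pricing stated in the talk: $0.55 per virtual CPU per hour.
RATE_PER_VCPU_HOUR = 0.55
HOURS_PER_MONTH = 730  # average hours in a month, assuming 24/7 operation

def monthly_consumption_revenue(vcpus: int) -> float:
    """Monthly revenue for a deployment running `vcpus` continuously."""
    return vcpus * HOURS_PER_MONTH * RATE_PER_VCPU_HOUR

# Hypothetical expansion path: a 32-vCPU pilot grows to a 256-vCPU
# production deployment as the customer sees value.
pilot_monthly = monthly_consumption_revenue(32)        # roughly $12.8K/month
production_monthly = monthly_consumption_revenue(256)  # roughly $103K/month
```

The point of the model is that revenue scales linearly with deployed compute, so an 8x expansion in footprint is an 8x expansion in consumption revenue, with no new license negotiation.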
First is the pipeline, and the pipeline has basically doubled in qualified opportunities from the prior year. We went from roughly just under 300 qualified opportunities to over 600 at the beginning of this fiscal year. Sales cycles are shortening. Six or seven years ago, they were over one year, with discussion about the cloud, about IoT, about AI, et cetera; now everybody's basically on board and wants to deploy very quickly. We're down to under four months average sales cycles. Pilots: there's significant interest in engaging in pilots and doing them. This is quarter to date. Roughly halfway through this quarter, as compared to halfway through last quarter, you'll see triple the number of pilots closed relative to last quarter.
The number of deals has also accelerated. This is a comparison to midway through the first quarter a year ago, and we've seen a 60% increase in the number of deals that we've closed.
Yeah, it's quarter to date. At this point in the quarter, roughly halfway through, a year ago we did 10 deals, and this quarter we've done 16 to date. Where are we seeing these pilots? There's very strong interest in federal and aerospace and defense; that represents roughly 37% of the pilots we closed in the last quarter. Manufacturing is also strong at 15%-16%, high technology and oil and gas are just over 10%, and there's a smattering of interest from other industries, representing 5% each. That's where we're seeing pilots. Let me talk about specific pilots that we're doing, and then about the conversions and expansions that we have underway.
Last quarter, we did 19 pilots through the quarter, and you can see each pilot has a specific focus area. For example, if you pick Owens-Illinois, they were focused on reliability of their facilities. If you look at Nucor, also a new logo, that was production schedule optimization; that was where they wanted to start with us. This quarter, so far, we've done 11 pilots, and we're on track to exceed last quarter's number. You can see we have now also closed a deal with Nucor for generative AI. Last quarter, they started with production schedule optimization; this quarter, they're doing generative AI with us. Let me talk about conversions and expansions for a moment.
Last quarter, we announced renewals at Con Edison, renewals at NYPA, renewals and expansions at the US Air Force with RSO, which I'll talk about in a moment, and also pilot conversions. This is basically drilling into a pilot and how it converts, which is another metric that we're tracking and that you all are tracking. Take the example of an industrial manufacturer: they wanted to take advantage of all of our applications, but they wanted to start with one. The first one we did was Reliability, and this was to improve the reliability of certain equipment in their facilities.
We picked one facility and demonstrated that, by applying AI to the data from the equipment, the maintenance systems, and the process systems, we could improve the runtime for that equipment by 20%. Pretty significant. What that means is that the asset utilization for that facility goes up by 1%-2%, which is worth tens of millions of dollars to this organization. That's a pilot that we did in four months and then converted. In fact, let me share with you the timeline for this particular customer. We met with the CIO and their team in June of last year. Three months later, we had signed the pilot; we went back and forth as to where we should start, and we ended up with the reliability case that I shared with you.
Within four months, we had concluded the pilot and demonstrated the 1%-2% asset utilization improvement, and then we converted that to a production deal in another couple of months. This is a very typical sales cycle: the pilot, delivery of a production application, and then the conversion to production. One more example is a large food processor, where we're applying AI to improve demand forecasts. If they can figure out what their customers need, they can manufacture the right stuff and distribute it to the right locations. For this food manufacturer, we improved the forecast accuracy by 12 percentage points. That's massive.
It means they can reduce the amount of food waste that they have pretty significantly, and that's now converted and expanding across all of their SKUs, their products, if you will. I'll conclude with the work that we've been doing with the US Air Force and the Rapid Sustainment Office. Again, this started with one aircraft platform, the AWACS. They gave us a second one, the C-5 Galaxy, and from there, we are now designated as the AI predictive maintenance system of record for not just aircraft, but for everything in the US Air Force. That's a pretty big accomplishment.
If you look at the expansion, we now cover 16 aircraft platforms, achieved in 16 months, which is pretty remarkable in terms of the speed at which we accelerated from a couple of aircraft platforms to 16. The economic value here is measured in billions of dollars. This is not insignificant in terms of readiness, the throw weight associated with the Air Force, as well as the reduction in maintenance costs they can get out of this. In summary, we're seeing increased sales activity from all the interest in AI in the marketplace, sales cycles are shortening, the pipeline is growing, and I would say our prospects are very encouraging. I'll conclude there.
Really excited to welcome Honorio Padrón to join us. Honorio is Chief Technology Officer over at Pantaleon, and I'll let him tell the story of the pilots that we're engaged in. Thank you for joining us, Padrón.
Thank you.
Mr. Padrón. Thank you. There you go.
I'll take one of those just in case I need it.
There you go.
It's vodka, right?
Yes.
Thank you for inviting me, Tom, Ed, and the team, and for trusting me up here with such an important audience. I'm gonna tell you very quickly about Pantaleon. You probably haven't heard of us unless you looked us up recently, so let me give you context on where we are and what we're trying to do. Pantaleon is a sugar company. We're a family company, 173 years old, headquartered in Guatemala City, with operations in some other countries, and I'll talk about that. One of the key things here, which is a message that I will develop a little bit more, is that we want to be in the top 10 sugar companies in the world.
The only way you can do that is not only with great talent, but with great partners, and I'll tell you why I think this is a great partner, for many reasons. The family owns several businesses. Today, we are the number one producer in Central America, and we're about number 16 in the world. We want to climb to 10. We do about 1.2 million tons of sugar a year. This is a very interesting business, and, by the way, I'll tell you more about me later, but this is the first time that I've been in the agricultural business. It's fascinating, and also very complicated. Every sugarcane that gets to the mill is different because of all the environmental conditions: how it grows, fertilization, the rain, all of that.
This industry is also extremely complicated and, for today's world, very sustainable. We have no waste. We use our bagasse to generate electricity, we generate alcohol, we generate molasses, and obviously, sugar is our main product. Our objective, and what C3 is helping us with, and what I'm gonna talk about, is: how do you get the best sugarcane to the mill, and once it gets to the mill, how do you maximize the extraction of the sugar? Because even though we use everything else, it's not at the same price point. We're complicated. We are in five countries. We have a trading office in the United States, in Miami.
We have one in Chile, and then we have operations in Guatemala, with one mill; one mill in Nicaragua, where it's difficult to do business; and two mills in Mexico, where it's very difficult to do business. I'll give you some details on Mexico so you understand how difficult it is. In Mexico, you cannot own your own sugarcane. You've got to buy it from small producers, and the government tells you what you're going to pay those producers. Imagine. We've got to pay them in advance, but you never want to get ahead of them. Perhaps AI can help there, too. That was just to give you a little bit of background and to show you that we are vertically integrated in some places, but not in others.
It's a very complicated business, and digital transformation, AI, all of those things needed to be in the picture. Let's talk a little bit about me now, why I'm here, and what we are doing at Pantaleon. I know this team. This is my second time working with them. I worked with them at the beginning of Siebel Systems and had a great experience with all aspects of working with a partner like this, and when it came to making a decision here, that was also a component. I have had six, seven CIO roles; I've been around. I'm one of those CIOs that was hired to change stuff and move on to the next one. I was actually retired from being a CIO. I was consulting.
I did 15 years of consulting, and I was consulting in Latin America when Pantaleon twisted my arm to come in and do a total digital transformation. We went at it back in 2019, changing everything end-to-end. I was in Disney World. This was a dream: end-to-end, from buying toilet paper to processing the sugarcane. We selected SAP as the base platform, and that got us to the situation where we now have a decent foundation to move into the next level of evolution in the company, where we begin to apply AI. You know, AI has been around for a long time. We all know it. I will tell you a little bit about my perspective as a head of IT.
I will tell you that I learned this, and I have a lot of scars on my back, because back in, I'm going to date myself, back in the 80s and 90s, okay, we were all out there with our egos in our hands, thinking that we were developers as heads of IT. You may remember when ERPs were not really a thing, or you may not remember, 'cause most of you are young, but you can read about it. Back in those days, there was no SAP ERP, no Oracle ERP. We were all putting our own stuff together, not very successfully. Guess what's happening today? Most of the CIOs and CTOs out there think that they can do that, and they're grabbing tools from everybody and trying to put it together, and not doing very well.
Companies like Booz Allen, that recognize that there's a platform out there, are jumping ahead. There are others that are perhaps spending a lot of money, because I was at a conference last week with a bunch of CIOs, and there was a telecommunications company there that has been working with one of the major providers' tools for two years now, and they're not getting there. IT departments should be about applied technology. They're not in development. That's what people like Tom and his team specialize in, and they've been doing this for years. As a CIO, as a CTO, I'm very, very keen to buy applied technology. I went out there and I looked, and Tom is right. There's not another platform out there that's already integrated with all of the services.
One of your guys, in one of the visits that I made to your corporate headquarters, made an analogy and said, "Everybody thinks they're a general contractor, and they go to Home Depot, and they buy the wood, and they buy the tools, and they're building a house. A year and a half later, their family is still homeless." Okay. What happens here, and Tom's model up here illustrates it, is he's got the house prefab, but you can change the roof, you can add another room, you can do multi-colors. That's where IT departments should be.
I think it's just a matter of time before the knee of the curve, and I think we're already there, when folks like me realize: don't be messing around with all those tools, trying to do it like the old ERP folks that didn't succeed. Go jump on a platform that accelerates your success in terms of what you're trying to do with your company. We finished the digital transformation. Basically, we've got the last part of it, in Mexico, coming up now in October, and we selected C3 AI to begin to do the kind of work that we need to do now to be more competitive. In our case, it's not just being competitive, it's surviving, because our margins are so small. We're in a controlled commodity environment. We can't make big mistakes.
We gotta be good at what we do, or we won't be around. The company has been good in this market for 173 years, but that's not gonna continue unless we do better. One of the aspects of what Tom was talking about is, you know, the integrated open architecture. You can use any tool, really. When we brought in C3, some of our data scientists were saying, those of you that know the language, "Oh, but we're using Python." What? Keep using it. It fits into the open architecture. One of the big aspects, or one of the biggest challenges that we had, was the integration of the data.
For every model that we were going to do the old way, you had to do a data extraction, 'cause I can't let them touch the live data, 'cause they may bring the system down. The way that C3 works, without getting into a lot of details, that problem goes away. It goes away because you set up pipelines, and then you bring the data over, and all the calculations are being done on their side, not on your side. You'll never bring the system down. Things like that, which we looked at in detail, made us very excited about the selection of C3. You know, this one here is one that I don't think gets enough play. There are not enough people out there to take care of all the AI demand. There are not gonna be.
With this platform and the way that C3 works, which I have on the next page, you need smaller teams. You don't need 10 data scientists. The model that they have is not like a systems integrator's, where they want to move in with you. They want to teach you and get out, and you do it with your people. Everything that you're doing is reusable, like that reference data model; you use it every time. You want to add three more data sources? You add them, put them in the relationships, but everything you did before with the prior 20 models is reusable. It really reduces the need for talent. Like I said, that one doesn't get enough play. What are we doing today? Very exciting.
We started with harvest planning. What happens? You do a harvest plan at the beginning of the harvesting season, which for us runs November through May, 24/7. You start, and you don't stop until you run out of sugarcane or the rain stops you. Again, the key here is getting the best sugarcane to the mill. You set your plan, but stuff happens. That's where the optimizer comes in, because one of the key things is that the software that comes as part of the kit, the 40 different options you can select from, is not just predictive-type activity. We're doing optimization as well, on top of being predictive, which most of the models being done out there do not do, again, as you put it together with your kits.
We wanna know, two weeks out, where do we go harvest? We have a plan that says you're gonna go here and here, and, for instance, in the Guatemala mill, we have seven crews at a time. Where do you send those crews tomorrow? You have criminal burns, something probably strange to you, but our fields get burned all the time criminally, or it rains too much, or you have maladies. All of those conditions go into the model, and then you know where to go next. The second one that we're looking at is what we call undetermined losses in the mill. This is stuff that happens, and interestingly enough, for every batch of sugarcane that comes into the mill, you adjust your machinery.
For every one of them, we take samples, there's a laboratory testing process involved, it's very complicated, and we've got to get that measurement to the floor really quick so that they can adjust the machinery. Imagine taking all of that from paper, to manual, to now all automated, to now putting intelligence into it. It's gonna be a fantastic closed-loop, integrated process that is gonna allow us to minimize what we call undetermined losses. Most exciting of all, and we have a little note here that says, "Pending confirmation." Well, it got confirmed while I was sitting there. We now have two Gen AI projects starting. The first one, I'm gonna throw a number out there: it's gonna be a $3 million to $5 million a year benefit.
What we have today is a number of service contracts across all of our mills that we really don't know much about. That is one area that has not been integrated. We're gonna put the Gen AI activities on top to be able to analyze those contracts and optimize them. You know, we know that. We're coming, again, from acquisitions, from being disintegrated. We may have, with one vendor that fixes air conditioning, five contracts in one mill, you know? With all of those things, you don't have enough people to really understand, analyze, and optimize them. Then the other one is very key to us, because of forecasting and contracts: we sell 80% of our production before the season starts.
We also sell everybody else's sugar, but of our own production, 80% is under contract before you cut the first sugarcane. How do you price that? Today, there's a spreadsheet, about 50 columns by 150 rows deep, that one person does, and I've been wanting to put that person in a cell with life insurance, because if we lose that person, we're in trouble. We're about to fix that with Gen AI, 'cause there's a lot of documentation that comes in from the industry, and, you know, El Niño, La Niña, all of the weather components. Today, it's just not humanly possible to do all that in the most effective way. That's four, and we're looking at more possibilities, and I know they're endless.
One of the things, and I don't remember if you had it in your statistics, is that you may take 3.7 months to sell your first pilot, but your second and third come much faster. The issue that we're having right now is prioritization, more than anything else. I think I'm running out of time. I'm gonna go fast here. You know, we're in the process of doing a lot of this stuff. I would tell you, the experience that I had with this team is being repeated in terms of, you know, the trust that you can have, the confidence, and the quality of the work.
One of the things the team has brought: when we decided to do business with C3, we didn't just partner with a platform, we partnered with an excellent team. You know, we're amazed at the number of people that have to apply for somebody to be selected. They had, what, last year, like 92,000 applicants for 300 positions, and we see that. One of the added benefits that we're getting from the staff is that they're not coming in just to automate the processes. They're very, very smart in learning your process and suggesting how to fix it with the technology by re-engineering it, not just a copy-paste type of activity. That has been an added benefit that we sort of knew was coming as we interviewed the folks and saw the team that they were gonna put on this particular project. It has materialized. Our head agricultural guy, a guy that I have a lot of healthy arguments with, because they always want to go do their own thing, come up with their own technology, and be standalone, said the other day, and if anybody doubts it, I'll get him on the phone to tell you, that he wouldn't be doing the project, the first pilot, the harvesting optimization, with anybody else other than C3. He was out there looking at all kinds of companies to bring me, 'cause...
Typically, what they do is they go out there, they get the technology and say, "I want you to use this," but it doesn't fit. That was the cycle. In this situation, he said, "I will not do this with anybody else other than C3," and that's coming from the agricultural guy. We're trying to expand, looking at a strategic plan, because I can't do it any other way. I'm sitting down with the teams and saying, "Hold on, you know, we have a tool, we have a partner, we have a platform, we have ideas. We need to sit down and figure out an AI strategic plan," which most companies are not doing. Again, they're going out there saying, "Let's go use this tool.
Let's go do this over here." I had one executive that had already worked out a training plan with HR; they were going to release ChatGPT to the company tomorrow. I said: "What are you gonna do with that? How are you gonna use it? Do you know that it's got security issues? What platform is it gonna be on? Are you gonna allow them to touch company data with that?" A lot of companies are doing that. They're not doing their strategic plan. They're not asking, "Can I use a platform that's already put together, instead of trying to, you know, buy a Lego set and assemble it and not get there effectively?" Thank you very much. You're next.
On the mic? Hello, testing. We're good? All right. Honorio, that is a really tough one to follow. I mean, two pilots confirmed during your presentation? I'm not sure if I can top that, but let's give it a go. Good afternoon, everyone. My name is Merel Witteveen. I lead our alliances organization at C3, and I'm very excited to be here. Prior to joining C3, I was at McKinsey in the Amsterdam office. I came to the Bay Area for Stanford Graduate School of Business, fell in love, and met C3; the rest is history. A little fun fact, in case some of you were googling: I actually used to be a professional athlete. I went to the Olympics in 2008 and won a medal there.
It's not what we're going to talk about today, but it's probably just as exciting, so that tells you how I feel about my work here at C3. All right, let's do this. At C3, we have a global partner ecosystem. You can slice and dice that partner ecosystem in many, many different ways. You can draw a big circle and add some system integrators on the outside. You can create a cube or a box. I think what is most relevant for you today is to categorize it into our go-to-market partners and a whole set of additional partners. I could speak for hours about the second category, but we're gonna focus today on the first. When we go to market with our partners, we always put the customer at the center.
We really start with the customer or the prospect, and we ask: what will actually benefit the customer? We try to assign the best cloud partner to this customer, either go to market together with them or be introduced to them, or actually introduce them in some cases. Depending on the customer, we introduce a system integrator as well. Now, you may ask, you know, "What's in it for the customer?" Well, the customer gets to deal with a team that is fully aligned. There's no daylight between the partner and C3. What's in it for the partner? Well, our cloud partners, when they deploy an out-of-the-box C3 application at a customer, see consumption almost instantly, instead of, you know, having to build these solutions over time and slowly building up that consumption.
There's a lot of incentive for cloud partners. Now, of course, for system integrators, it increases their footprint, and for both of them, it's truly a unique product offering. Let's go to the next one. I'm gonna highlight a few different examples of our partners today, and I'm actually gonna start with our Google partnership. Our partnership with Google was signed in 2021, and it truly is unique because it has sponsorship from the top level. Our CEO, Tom Siebel, and Thomas Kurian, the CEO of Google Cloud, are personally very involved in this partnership, and they really make sure that it runs successfully. Google has made a significant investment in C3, and they have a dedicated co-sell team that works with our sales team to go to market jointly to our joint customers.
The partnership is buzzing, it's humming, it is very strong, it's moving fast, and in the last year, we've increased our joint pipeline 4x. It's a huge partnership. An example of how we would partner with Google could be, for instance, our generative AI solution that Honorio was just referencing. What you could do is deploy C3 Generative AI on Google Cloud and leverage several additional Google services. In this case, that could be your large language model, the PaLM 2 model from Google. We leverage their services, so it's very clear we're not a competitor in this case; we're truly complementary to each other. To show the success of this partnership, the Google team agrees with that as well.
Last year, we actually won the Partner of the Year award in AI and ML. The applications are in for this year, and I'm very hopeful that we're looking good for this year as well. I also wanna highlight, Honorio, from your conversation, that our Google team was very helpful in this deal as well. This is a little bit of an exception, though, because C3 introduced Google to Pantaleon, and that truly shows the strength of the partnership. It's mutually beneficial. I'd say usually C3 gets a lot of the benefit from leveraging the Google customer base. Let's move on to the next one. There we go. AWS.
AWS has been a longtime partner of C3, almost since the inception, you know, that Tom was referring to when he founded C3.ai. We strengthened, or sort of expanded, our partnership with them back in December of last year, 2022. You know, the AWS team came to us and said: "We see you do all these successful things with other partners. We want in. We want to go to market jointly with you. We want to co-sell." They made a huge investment in us, in, for instance, our C3 Law Enforcement application. They invested in a demo, and that now natively leverages Amazon Rekognition, OpenSearch, and several other AWS tools.
Again, truly showing that, you know, you deploy it on the AWS infrastructure and you leverage these tools, but it's not competitive, and it truly shows the unique product offerings. Currently, over 50% of our customer base is deployed on AWS, and we have increased our joint pipeline by 24% in the last quarter alone. That recommitment from both parties really drove some additional go-to-market work here. One of my favorite examples of the AWS working relationship is our work with Koch Industries, one of the largest private companies in the U.S., where, you know, Koch Industries has selected AWS as its strategic cloud partner and C3 as its enterprise AI platform of choice.
The two of us actually work really well together. We started out at Georgia-Pacific with reliability for paper machines, with one successful use case, and we're deploying that now in many, many other business units. The beauty of this is that AWS sees the value in the way C3 deploys these pilots and then, you know, goes across business units to scale up. They're actually investing in quite a few of these pilots, to truly show the commitment there. All C3 AI products are available for purchase on the marketplace, either as a C3 AI pilot or as our Generative AI product offering. What's in it for the customer here is that it truly simplifies the procurement process.
You know, compared to these lengthy cycles that Ed spoke about, purchasing through the marketplace simplifies a lot of the procurement. What it also does for the customer is that it allows them to draw down their commits. In case the customer has a large commitment with Google Cloud, with AWS, or with Microsoft, they can purchase our software on that marketplace and then draw down on that commit. It's very beneficial for the customer. My last example, certainly not my least, is a very new partnership at C3. Tom alluded to this already, and we're very excited about this new partnership with Booz Allen. We signed this partnership back in December, I believe. Graham, if I'm correct? Yes, in December.
Since then, we have already closed 18 deals together, so this partnership is ramping up very, very quickly. Booz Allen has also made a big investment in training a lot of their people on the C3 AI Platform. What that means is that we're going to be co-delivering a lot of projects, currently mostly in the defense space. An example here is the work that we do for the Department of Defense Chief Digital and Artificial Intelligence Office, or CDAO. What we're doing here is focusing on a set of use cases, C3 AI Contested Logistics, C3 AI Commander's Dashboard, Presidential Drawdown, RAVEN, RSOC CBM+, and all of these are deployed on the Department of Defense Advana platform.
With these applications, these use cases, we're deploying them at the agencies here on the right-hand side: the US Air Force, SOCOM, TRANSCOM, DLA, the Joint Staff J4, and OSD. Graham is the presenter right after me, and I'm sure he's gonna go into a lot more detail on these, so I'll give him a little bit of a nudge here. What's next for the partner ecosystem at C3 AI? Today, over 60% of our pipeline is co-developed with our partners. What does that mean?
Well, in many cases, partners come to us with new prospects or new customers, and also the other way around, and then we, you know, keep track of which partner is on which account, and we continue to develop and truly go in and sell together. The alliance ecosystem at C3 is very strong. I am very excited to see what the future brings, or to bring the future, as it were, since I'm very involved in this. Thank you. I would love to introduce Graham Evans, Vice President at Booz Allen.
Thank you.
Thank you. Why don't you sit down?
I got it. Thanks. Tom, you made a comment about my photo in your intro. You know, it's true, that photo was taken before I met you. I'd love it if, when we have a break, you could give me some tips on how to age gracefully in the tech industry. That'd be great. All right, thank you very much for having me, C3. I'm really excited to talk about our partnership. Graham Evans, Vice President at Booz Allen. I lead a lot of our data platforms and enterprise analytics work with our federal customers. First, I want to talk a little bit about what Booz Allen does. We were formed over 100 years ago.
The base of our capability was formulated out of the commercial consulting industry. Now we are primarily focused on the federal space, so the defense, civil, and intelligence industries. We have over 30,000 employees, and we mainly focus on consulting engagements related to analytics, digital, cyber, and engineering. I'm really excited, and I'm super proud, that almost a third of our organization is veterans or folks that are currently active in the military, which gives us a great connection back into the mission of those customers that we serve. It's truly inspiring to be working alongside the war fighters, and veterans, and civil servants that we are serving every day, as we deliver our capabilities against some of the nation's most challenging problems.
As Tom mentioned, the market for enterprise AI applications is substantially larger and growing at a much greater pace than anybody, I think, ever expected. For our federal customers, this creates both opportunities and some threats. From an opportunity perspective, this might take the form of creating some type of economic benefit or financial efficiency within a government agency, like mitigating fraud, waste, and abuse, or, as we learned about earlier, using predictive maintenance to identify cost savings for an organization. Now, there are also threats. Obviously, our adversaries are leveraging AI as an instrument of warfare, and we have to be prepared to defend against that.
You know, we're really excited to be part of that solution, to create AI capabilities to improve decision making within a defense organization, from the battlefield to the boardroom. Of course, Booz Allen is also an outspoken leader in the ethical use of AI in the federal space, where it's become a hot-button issue, especially with the ubiquitous use of AI through autonomy capabilities as well. We are positioned for sustained, diversified revenue growth, given our central role in critical AI pathfinder and enablement programs, and C3 is a partner of ours on a number of them. We're also one of the largest providers of AI solutions in the federal space.
We have over 150 projects within the federal government where we're actively producing AI capabilities to solve, like I said, some of the most challenging problems that our customers have. This up here is sort of our comprehensive approach toward delivering AI capabilities to our customers, from strategy through operations. That obviously involves delivering some type of IT capability, and that's not without its challenges, as most of you probably already know. I'd like to illustrate that with a little scenario from one of our projects. I can't really talk a whole lot about it, but from what I can share, we have one team, basically a digital SWAT team, and they get deployed into challenging government organizations with a specific challenge, to say, "Hey, go...
I've got this issue, and I need you to go collect data. I need you to solve this problem using AI, ML, or whatever makes sense, because I can't figure it out." You know, looking back over the last couple of years, the team was engaged during the COVID-19 pandemic to integrate all of the different data sources that were coming in from the states and from the different services, to give commanders a better sense of what's actually going on: how do we support our war fighters, how do we make better decisions, how do we continue to fight? In that case, we built a data platform.
We spent many hours, and days, and weeks, and months trying to create an integrated AI/ML pipeline to solve some of the challenges around predicting when the pandemic would cause a particular outage in a supply chain, or something like that. Fast-forward: we took those same lessons learned with that digital SWAT team, and we applied them to the Afghan evacuation when the Taliban was coming in. We had a team deployed that was helping look at how we could leverage data and AI to essentially bring our allies and partners from Afghanistan to locations around the world that had the appropriate medical facilities, housing, and things like that.
Then finally, in the most recent iteration, we had that team deployed when Russia invaded Ukraine, looking at how we could ensure that the supply chain for our allies and partners was not interrupted, and that we were giving the right types of capabilities to our allies and partners. The challenge that we faced during that entire time was the same: we had these bespoke solutions that we created every single time we had an engagement, and, again, we spent all of our time building out the infrastructure. We spent all of our time orchestrating data operations. What we identified is that in order for us to grow, in order for us to continue to repeat that for each of our customers, we needed a common foundation.
We needed a common tool set that allowed us to focus more of our time building out the more interesting insights that are gonna allow our war fighters, and our partners, and our veterans to get the capability that they need, as opposed to spending all of our time on the infrastructure. That's really where C3 came in, and I think Honorio talked about this. With an organization as large as Booz Allen, we can't have custom software solutions, because it's gonna take too much time training our employees to get them up to speed on how to use different capabilities, and we're not gonna be able to deliver capability at the speed of relevance that our customers demand.
Finally, in the space that we deal with, there are a number of providers, vendors and whatnot, that wanna do business with the federal government. We're looking for a differentiated partnership. We're looking for organizations that can help us deliver our capabilities faster, but also help us deliver something that's unique to our customers. Really, that's where C3 has come in. C3, from day one, has been helping us with this challenge across the federal government. There are really two main reasons why we selected C3 as our partner in a number of those customer areas that Merel was talking about. You know, one is the technology. It's world-class. The other is the actual partnership itself.
From a technology perspective, C3 provides that turnkey AI/ML solution that I was talking about, which is such a challenge for our customers. We can rapidly deliver complex decision support capabilities, look at the next best action, and predict what the consequences and results of those actions are, and it fits right within our customers' workflow. It's not something like a static dashboard that they're used to seeing. It's something new and unique that differentiates us in that space. There's also the open architecture that Tom was talking about, where you can plug in different tools. Our customers already have investments that they've made in things like Databricks or other data visualization tools that we can easily plug in as part of that AI/ML architecture.
Most importantly, the relationship that we've had over the last six months has really been a truly different experience for me from a partnership perspective. From the first conversation that we had with the C3 leadership, it was very clear that they were an organization focused on the economic outcome of the solutions they were providing for their customers, as opposed to just selling more licenses. While that's important, it's really about aligning to the ethos that your customer's mission is the most important thing, and we really saw that with C3. The partnership also gives us insight into what's going on in the commercial industry, so we can see trends before they actually make it into the federal government.
The technology life cycle in the federal space, as some of you may know, is sometimes a little bit behind what we see in the commercial industry. What's been really interesting is to see solutions that have been implemented in the commercial space around supply chain or readiness, which we can easily apply to our customers in the defense and intel space. As Merel mentioned, we've had a very successful six months over this first initial part of our engagement. We have 18 closed deals, and about 19 in our pipeline that we're looking at very closely, which are super exciting.
One of the things that we've been really focused on is training up an army of developers, so it's not just a one-application-at-a-time kind of deal. We have folks that are ready, that are making the C3 platform a part of our regular delivery toolkit as we go and deploy new capabilities to our customers. I'd like to articulate one of these successful programs that we have using C3. I can't really talk about who the customer is, but let's just say, in general, when we talk about delivering these capabilities to our customers in the federal space, senior executives in DoD, we're talking the Deputy Secretary of Defense, we're talking combatant commanders, expect smart applications today.
The static dashboard they might be used to is okay, but it's not really the expectation anymore. Leaders wanna be able to look at scenarios. They wanna be able to predict outcomes. They wanna create alternative plans on the fly using the information that they have available to them. That's really what our customer asked us to do. Where we've seen some early success in our deliveries together has been in really two main areas. One is what we're calling Commander's Insights. Essentially, what that means is you've got those senior executives like I was talking about. They want visibility into their vast enterprise.
I mean, the Secretary of Defense and these combatant command commanders have, like, unimaginably complex businesses that they must run from a people perspective, acquisition perspective, readiness perspective, very much like a commercial business, but with the aspect of an impending war, if you will, that they have to be ready for. They need information at the speed of relevance, and they need it to be dynamic. They don't want an army of PowerPoint rangers roaming around the Pentagon to be able to give them week-old data. They want that at their fingertips, at their desktop, to be able to make decisions, billion-dollar decisions, life or death decisions, on the fly. We've had some serious success with building out capabilities rapidly with the C3 platform in that area.
Another area that's emerging, and it's extremely important given some of the external conflicts that we're seeing in the world, is contested logistics. There's a phrase in military circles that amateurs talk about tactics and strategy, and professionals talk about logistics. In the circles that we operate within, in the Pentagon, some of our joint customers, they're all about ensuring that their logistics, their supply chain pipelines, their inventory optimization, their predictive maintenance, that sort of thing is all-encompassing within this contested logistics swim lane. We're really excited about some of the discussions we've been having recently in those areas.
Again, I mentioned this before, but it's really been interesting to see the analogous use of some of the existing applications that C3 has put a lot of investment in around the commercial application of logistics or supply chain optimization that we can then very easily extend into the federal space. One very interesting use case is a case study we did for one combatant command organization. One of these customers asked us to look at how we could consolidate some of the myriad data feeds that would go into briefing the commander about the health of their organization and their warfighting capabilities. We looked to C3 to help us build a smart application to replace some of the existing static dashboards that were being used.
Again, whether it was PowerPoint or some type of data visualization capability, it was days-old data, weeks-old data, that wasn't giving the commander what they wanted. The application that we built helped create additional risk scenarios they wouldn't have been able to get through a PowerPoint presentation, for example. It used AI/ML capabilities to look at the vast amounts of data that we had access to and create alerting and other types of information that would be available to that commander about the readiness of their force. The result of that was a very well-received pilot by the command. This activity, as we talked about before, was done in a 12-week pilot execution.
As Tom mentioned, it was on spec, on budget, and exactly what the customer was looking for. The result of that, for us, for our partnership, was a continued extended contract to build this for the entire command in a longer-term engagement. All right, looking ahead, what's in the future? A couple of things I'm pretty excited about leveraging the C3 platform for: number one is building on the progress that we've made so far in some of the logistics organizations that we've been talking with. Merel mentioned the Joint Staff J4, for example, and DLA.
Getting inside the decision cycle when we're talking about contested logistics, being able to link planning and the actual delivery of capability is something that's a huge challenge across the enormity of the DoD enterprise. That's something that our teams have been working on very closely together, and we're really excited about what we can do in those areas. There's lots and lots of opportunity ahead for that. The other thing that's really interesting, too, is I again go back to this concept of reuse. If we can build a Commander's Insight Dashboard for one combatant commander or one service, we can build it for the entire organization. Each organization needs a different view of their version of that business. The same thing goes with contested logistics.
If I'm a joint combatant command, I care about this portion of the world around logistics; the Navy might be interested in some portion of it, the Army, et cetera. There's probably gonna be a presentation today about predictive maintenance. If you build a predictive model around airframe predictive maintenance, it's probably not too far a venture to look at it for surface ships or ground vehicles. We're really excited about how we can potentially reuse and create a flywheel of growth on the C3 AI Platform, in addition to looking at civil and intel use cases, which are also becoming more interesting to us as we start to grow awareness of the C3 Platform across the Booz Allen enterprise.
Finally, as Tom mentioned, we've got a common customer in the DoD space who likes to talk about their vision for the Google of DoD, for the search for DoD. The example here would be the Chairman of the Joint Chiefs is sitting down at his desk, and he wants to know, how many tanks are service-ready to go fight in this next conflict, or how many E5s do I have in INDOPACOM that know Mandarin? Today, seeking that information out would take weeks and weeks of data calls across the entire department.
With the application of something like C3's generative AI capabilities, all the data that we've collected in these data platforms that we produce for our DoD customers can be leveraged to quickly answer those types of questions in seconds. Extremely exciting. The future is extremely bright for that particular capability. In summary, we at Booz Allen are super excited about integrating C3 into our delivery toolkit for some of our federal customers, and I look forward to working with the team going forward. Thanks very much.
Thank you. We'll pick back up after the break.
I do remember my first time stepping on a refinery, going, "Wow, this is huge!
The dimensions of it are just mind-blowing.
It has thousands of miles of pipeline.
Up to 10,000 valves and 1 million instruments.
All of those things have sensors embedded in them. These vast data stores create a huge opportunity for digital transformation.
That can really lead to some significant new solutions.
Digital transformation is really about two things. It's about recognizing that the amount of data we have around our existing business gives us a big opportunity to transform the way in which we operate. Digital transformation is also about finding ways in which we can operate in the world that's coming.
Companies in the energy industry today are going through a massive transition. If these companies can't manage the energy transition, which is really about the world demanding more energy, but more energy from sustainable sources, they're gonna have challenges operating, both from a social license to operate, but also from just the basic economics of doing business.
We want to have an autonomous 3D racing game. We play at home on this.
Shell has a really rich history in digital technologies, and data science in particular. We developed a statistics group in the 1970s, and so when I started work in this space in 2013, I was building on a history that we already had. We saw that things were changing, and in particular, what was changing was the ability to take large data sets, to store them in a cloud-based environment, and to use cloud-based computation to accelerate the development of new software.
In 2015, I was really blessed to be given a chance to lead the digital transformation team. Preferably, we want to do this part-
Elisa was a phenomenal visionary. She understood where the technology was going, and she understood the transformation that that could lead to for the company.
I think I've been working with Dan for about 10 years. Yeah. When he came in, he was this young punk with his hair standing up. Even though today he still looks like a punk, yeah?
We've got that in there, and then.
What I really treasure about Dan is his passion in using technology to solve problems.
In 2014, we had a big incident where we were lucky that no people got injured. Our control valve failed.
The plug came loose from the stem, and it started oscillating in the process, and at one point, it actually broke the body of the valve.
We had to repair the plant. It cost us three weeks of in-plant downtime.
We started thinking about how we could use the data that we have already collected to prevent future failures.
I think it is.
Yeah, well, I see some drips there from the left side-
We contacted the Shell data science department. They had much more experience in this.
What was fortuitous about the work with Pernis was that it came at a time where we'd been working in this space for quite some time, in Shearwater in the U.K. What we'd been looking at was the opportunity to effectively take all of the telemetry data that we had, and to use that to train data-driven machine learning models to detect anomalies that humans otherwise wouldn't detect. We very quickly saw the opportunity to leverage this approach in Pernis.
We have years and years of data. We started with 15 valves.
We're looking at things like the temperature of that valve, the flow rate of that valve...
Pressures once every minute or once every second.
We're using that to forecast, if you will, the normal behavior of that valve, and then understand where the behavior of that valve deviates from normal conditions, and when that happens significantly enough to generate an alert.
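The approach just described, modeling a valve's normal behavior and alerting when it deviates significantly, can be sketched in a few lines. This is an illustrative toy, not Shell's or C3 AI's actual models: it uses a rolling window of recent telemetry and a simple z-score threshold (a real system would use learned models and would quarantine anomalous points rather than fold them into the baseline).

```python
# Toy anomaly detector for a single valve's telemetry stream:
# learn "normal" from a rolling window, alert on large deviations.
from collections import deque
import math

class ValveAnomalyDetector:
    def __init__(self, window=60, threshold=3.0):
        self.readings = deque(maxlen=window)  # recent telemetry window
        self.threshold = threshold            # alert at N standard deviations

    def update(self, value):
        """Ingest one reading; return True if it is anomalous."""
        if len(self.readings) >= 10:  # need some history before judging
            mean = sum(self.readings) / len(self.readings)
            var = sum((x - mean) ** 2 for x in self.readings) / len(self.readings)
            std = math.sqrt(var)
            if std > 0 and abs(value - mean) / std > self.threshold:
                self.readings.append(value)
                return True
        self.readings.append(value)
        return False

detector = ValveAnomalyDetector()
# A stable baseline oscillating between 20.0 and 20.1, then a spike.
normal = [20.0 if i % 2 == 0 else 20.1 for i in range(60)]
alerts = [v for v in normal + [35.0] if detector.update(v)]
print(alerts)  # -> [35.0]: only the spike deviates from learned behavior
```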
The challenge was how to automate that without a lot of human effort. As I said, 20,000 valves in Pernis, but we have, like, 100,000 valves in Shell.
How do you scale that to run that globally?
I think what we do.
We needed a machine learning operations solution. We also needed a solution that would scale on the vast quantities of data that's generated from these assets. We were looking to say, "Well, do we build this ourselves?" I think what we realized quite quickly is we don't want to be an AI platform company. Our focus is around our understanding of the problems that our industry faces.
We spotted a company called C3. They were at the cutting edge of developing predictive maintenance modeling, a lot of algorithms that we saw were appropriate for us.
Yeah, I think if you can set them up so that they are part of the onboarding process, then we ramp up over time. We were approached by Shell some years ago. We demonstrated that we could identify device failure with very, very high levels of precision.
This is being accomplished through the application of an entirely new generation of technology called predictive analytics, or AI.
Shell was interested in what C3 AI was building, and whether it would be relevant to Shell operations. The challenges they were running into were really around scaling and deploying their software and their machine learning algorithms to the entire footprint of Shell. While the mechanics of the valve were simple, the context of the valve required that each valve had to be treated as its own independent piece of equipment, which meant that you really needed at least one independent machine learning model per valve.
Shell has an unusually gifted team of data scientists. They are professionals. They understand the industry cold. The C3 AI Platform enables the Shell data scientists, using the tools they've always used before, to immediately put these applications to work at full enterprise Shell scale, which is as large an enterprise as there is in the world.
I'll write an investigation, send an email, and initiate it.
What C3 demonstrated was that their platform was capable of managing 2 million of these models. That was what excited us, because it gave us confidence that they understood the problem that we had, and that their platform was scalable enough to be able to deal with it as we put these models into production.
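The "one independent machine learning model per valve" pattern that makes this scaling problem so hard can be illustrated with a minimal sketch. The names and the per-asset baseline model below are hypothetical, not the C3 AI Platform API; the point is that each asset is scored against its own learned behavior, so millions of models must be trained, stored, and served.

```python
# Toy registry of per-asset models: each valve gets its own baseline
# fitted from its own telemetry history, keyed by asset ID.
from statistics import mean, stdev

class PerAssetRegistry:
    def __init__(self):
        self.models = {}  # asset_id -> (mean, sample std) of its history

    def train(self, asset_id, history):
        """Fit a baseline from this asset's own telemetry only."""
        self.models[asset_id] = (mean(history), stdev(history))

    def score(self, asset_id, value, threshold=3.0):
        """Score a reading against the asset's OWN model, not a global one."""
        mu, sigma = self.models[asset_id]
        return abs(value - mu) / sigma > threshold if sigma > 0 else False

registry = PerAssetRegistry()
registry.train("valve-001", [10.0, 10.2, 9.8, 10.1, 9.9])
registry.train("valve-002", [250.0, 255.0, 245.0, 252.0, 248.0])

# The same raw reading can be anomalous for one valve and normal for another,
# which is why a single shared model won't do.
print(registry.score("valve-001", 50.0))   # -> True (far outside its baseline)
print(registry.score("valve-002", 250.0))  # -> False (typical for this valve)
```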
The types of these successes tend to be somewhat under the radar, because they are all avoided failures, right? Nothing happened, and that's a good thing.
We have repaired 83 valves, basically, that we would not have found without the analytics, and we have millions of savings on unplanned downtime.
We've now managed to deploy multiple applications into production at scale in really critical areas. We've developed, together, mechanisms to monitor over 8,000 pieces of equipment every day.
We've been able to expand the scope from just valves. There's literally hundreds of use cases.
We also now collect quite a lot of data. We collect data using robotics.
They can monitor the position of hand valves. They can detect leaks, gas leaks, but also liquid leaks. They are really helping the operators doing their daily routines.
We look at how we can turn this data into patterns.
A lot of the opportunities are in optimization.
Optimization is important. We again look at the data. We can see where the best place is to put the warehouse, and what the best way is to get it out to our offshore location.
There will be hydrocarbons for certain industries. The question is, how do you produce it in a way that is responsible? How do you produce it in a way that gets to net zero emissions? How do you do it in a way that helps support the planet?
We've also developed technology to optimize performance of things like liquefied natural gas trains. A recent algorithm we deployed to one of these trains in Nigeria resulted in the equivalent of taking about 28,000 vehicles off the road. It's a great example of how AI is starting to have a big impact.
Shell does a lot to try to understand our customers. That's where AI comes in as well. They can use multiple sources of data to actually give a much, much more holistic picture of what our customer needs.
When you look at the energy system, everybody has a piece of the puzzle. The original equipment manufacturer has a piece of the puzzle. Baker Hughes stepped in with their deep understanding of how equipment operates. The operator also has a piece of the puzzle, because they understand how that equipment operates in context. Of course, when you talk about aggregating large volumes of data, you need to do that in a cloud-based environment. Microsoft stepped in there to help us to accelerate the speed of development of our solutions collectively.
We collaborate with C3, Baker Hughes, as well as Microsoft, to really look at how we can accelerate our AI journey.
Shell is a long-standing customer of Baker Hughes, naturally as Baker Hughes' relationship with C3 was formed, Shell became a natural party for us to collaborate on some new innovations on the C3 Platform.
We have, at C3 AI, a deep and committed relationship with Baker Hughes. Baker Hughes has been our partner and customer for the last several years in our bid to take our solution and our capabilities to the energy field altogether.
The Open AI Energy Initiative is a really exciting partnership between four founding members: Baker Hughes, C3 AI, Shell, and Microsoft. Really, what it aims to do is just to start to create a set of industry standards and references around how we bring AI technology to the industry.
The whole point of this effort was to take the learnings and innovation that we have been doing so far with Shell and bring the entire industry along.
I've been doing IT.
If we wanted to be really successful transforming the energy industry, we had to work with a broader ecosystem of partners. What the Open AI Energy Initiative tries to do is it tries to say, "Let's work together on a common digital infrastructure from which we can all benefit through fair value exchange, with a common set of open data standards underpinning it, which can accelerate the digital transformation of an asset or a site in the energy sector."
This was really driven by the visionaries at Shell that are by far the leaders in applying artificial intelligence to deliver cleaner, safer, more reliable energy with less environmental impact.
Nobody can go at this alone. It's a societal challenge that we're faced with. With that, we need to collaborate. It takes all of us to drive this decarbonization agenda.
I think this is an increasingly important part of how we look at digital, is how we help it enable our greenhouse gas ambitions.
Let's think about what Shell is doing. These guys have set the goal to reinvent themselves as a zero net carbon footprint company by 2050. This is ambitious.
The demand for green electricity is just growing across the globe. We couldn't do this without digital solutions.
If you look at, for example, electric vehicles, we can look at ways in which we charge those vehicles to make sure that we maximize the amount of renewables that are on the grid at any given time. We have the opportunity to optimize the grid behavior using AI, and to balance all of those new loads that are coming onto the grid.
If you take the example of the home, where solar panels are generating solar electricity during the day, but we're storing them in a battery and using them when people are home at night, that's a digital solution, so you wouldn't be able to do that without the right software solutions.
At the end of the day, energy transition is urgent. We're gonna have to transform the energy system faster than we've ever thought possible. Digital is one key lever to do that.
Companies that do not digitally transform themselves with this new generation of technologies will cease to be competitive.
My hope is in 10 years, we have an energy industry that's producing certainly an abundance of clean, safe, efficient energy, and is able to distribute it to parts of the world that frankly don't have access to it today.
I am very optimistic about the future. I believe technology is there to help humans solve many problems. Shell is here to produce energy, cleaner energy, for the people and for the planet.
To give you a scale for what a paper machine is, you could easily have north of 5,000 sensors, and that's on a low end, in one paper machine. That paper machine can, you know, produce hundreds of thousands of records of data on a daily basis that we analyze to figure out what's working, what's not, and why it's not working. You know, a paper machine, it could be the size of a football field. It's a fairly complex set of equipment that, you know, requires a lot of different processes, different subject matter expertise, to be able to produce, you know, thousands of tons of paper on a daily basis. There's a lot of debate on, is digital really valuable? Is this, you know, adding value to the bottom line?
Honestly, from my seat on the bus, not doing digital transformation, it's really not an option.
From a data scientist point of view, especially one who's working in, you know, industrial IoT, you know, where time series are generated, in real time, and they're often very large, and you need to make sense of that data, you really don't have a choice. It is a real value driver. The thing that I enjoy most, and this is, this is relevant for all manufacturers, is the ability to combine various data sources into a platform, and not only just do that, but make it happen in real time, and make it work in real time. That, to me, is the most exciting and most fulfilling part of our collaboration between Georgia-Pacific and C3 AI.
When we collaborate with C3, we have data scientists who are really good at what they're doing, and that really allows us, GP data scientists, to bounce off ideas off one another, and come up with a good strategy to tackle a huge, complicated problem.
What I would say to anyone who's on the fence or considering C3 AI is, it really is an industrial-strength data science platform. We are able, within one platform, to combine data engineering, data science, deployment, real-time scoring, visualization in one tool. The other part, which is so key to manufacturers, is the ability to combine data sources that we had only dreamed about before. To me, that led to increased model improvement, better accuracy, better uptime, eventually, on our assets. Again, the reduced unplanned events, the thing that guides the whole project.
The predictive maintenance application we've been using absolutely has an impact on our bottom line. What we're able to do is empower our colleagues to spend 80% of their time addressing problems, as opposed to looking for them. This is where artificial intelligence comes in really handy.
A lot of times, the technology solutions that we bring together for some of our customers are relatively disparate and not really well integrated, so we spend a lot of our time building interfaces to different tools. Whereas with the C3 capability, it's a full package of AI integration capabilities, from the data layer to the analytic visualization layer, and a lot of our developers love it. A lot of our data scientists love it because they can spend more time building insights than building infrastructure and pipes to create new capabilities. One of my roles is to help DoD organizations think about ways in which they can harness the data that they have available to them and generate new insights and capabilities to help them drive decisions faster.
Over the last, let's say, five to seven years, the technology journey has been quite remarkable. We started working on-prem, using relatively custom solutions, and I think AI has the promise of being able to crunch more information into that smaller decision space at a rate that is relevant to the mission. If you can come to a decision, the right decision, but it's too late, then you can't dominate, and you can't win. I think AI has the promise to be able to help organizations and leaders do that.
One of the things that I really see the value in a partnership with Booz Allen and C3 is where Booz Allen can bring the mission, understanding the subject matter expertise around the data, around the solutions, and C3 brings this really unique technological capability, but really sharp engineers that can help us integrate that technology into the mission with our partnership. One of the things that excites me the most about the work that we do, it's what gets me up in the morning, is the opportunity on a daily basis to support our war fighters, to support our veterans. Supporting the federal government with new AI capabilities is one of the most patriotic ways in which you can support your country, and that's really the reason why I got into this business.
I think there's two things that excite me about being able to bring capability to our clients faster through technology like C3. I mean, the first is that typically in the federal government, they're slower to adopt new technology, because of security or other acquisition-type ramifications. With the C3 partnership that we have with Booz Allen, we're able to introduce the best of breed of commercial capability. Advancements in integrating AI solutions into common workflows is something that I find really exciting about bringing that to our customers who have similar problems. It's just a matter of bringing the right technology and solutions to bear.
Booz Allen's more than 100 years old. We've worked for the federal government since World War II. We were drawn into service then to help the Navy with challenging operational problems. Many of those persist today. If you look at the Department of Defense, it's one of the world's largest organizations, perhaps the largest organization in the world, with incredible data stores and so much information that flows across the department. There's so much opportunity and room for improvement there. We're seeing opportunities all over the place, really, in what power you can bring to automating projects, delivering advanced analytics, and getting to more sophisticated use cases that might be rooted in more advanced machine learning methods.
It's really the enterprise scale and enterprise-grade platform that C3 represents, the amount of technical investment that's gone into it over the last decade plus. You just don't see that in every run-of-the-mill analytic library that you might pick up. I think many of these, while very accessible and perhaps easy to start with, fall short of the enterprise-grade solution that the Department of Defense would require. I'd say no job's too small, and everything that we're working on right now is truly changing the Department of Defense. It's impressive. It's changing what might have been there for decades, or the technical debt that exists there. The integration of platforms like C3 and what it offers is transforming the Department of Defense.
I am most excited about what we can do on that data when we have it all accessible, digitally native, and able to operate with platforms and tools like what C3 brings. It might be something as simple as automating someone's mundane tasks to free them up to work on more advanced things and help the department go further. We're excited about those possibilities. I think we're seeing a turning point, really, where going beyond sort of one-off pilots, we need to operationalize AI at scale across the department, and we're excited to be partnering with C3 on that. The blend of the technology and the mission is really where we see ourselves operating, and it's certainly core to our business model.
To find a technology partner that is patterned the same way is exciting, and I think it will allow us to go further.
Our main focus is on the projects for the customers that I've worked with, like the U.S. Census Bureau, and it has evolved in the eight and a half years that I've been there. Now that we are starting to see more of the complex problems that they have, we're trying to find better ways to solve them with more innovation. We have really good connections with our customers, the Census Bureau and the FBI. We really feel there are some questions and business problems that they have where our understanding of them as a business, and bringing in technology partners such as C3, would really assist us to deliver at that point of impact.
Finding a needle in a haystack is actually quite simple with a big enough magnet. The partnership with C3 is really trying to identify the one needle in the stack of needles. Being able to do that is the game changer for us: being able to drive and home in on the data that we're looking for. We've seen it from our customer, all the different disparate systems that they have. C3's capability to use multiple data sources without having to rebuild all of those connections is a multiplier in regards to time and efficiency, versus having to go back and reiterate multiple times for some of these legacy systems that we'll never really use again.
C3's ability to interface with a very wide array of data sets and data systems is very beneficial for us. For example, our team actually worked with C3's engineering side on the current law enforcement module: helping identify and relate historical information that may have been seen systematically, so that a reference to a tattoo or a visual marker that a human may see later and have missed is something the system itself can always track collectively, apply, find, and then notify us of those ties. That's where finding that one needle in the stack of pictures, all of that, helps us.
Turn this thing on. Testing 1, 2, 3. Are we live? We're live. Is this thing working, John? It is. Okay, let me introduce my colleague, Nikhil Krishnan. Nikhil has worked with us, I think, 11 years. He runs all data science. He came to us from McKinsey & Company, where he worked for a number of years. Before that, he was on the faculty at Columbia. He is a distinguished expert in all things related to data science, and has personally been driving this C3 Generative AI product. Nikhil's going to talk to you about that. I think you'll find this quite interesting. Nikhil.
Thank you, Tom. Testing. Thank you, Tom, for the introduction, and I'm gonna launch right into it. Let's see. Generative AI. As Tom mentioned, we've been working on this in the AI/ML world for a few years now, and then really last year got into it in great earnest in terms of a broader generative AI product capability, and I'll talk more about this in the next few slides. As we approached it last year, we put together the best of C3 AI. Over a decade of our experience in building very large-scale AI applications has gone into this generative AI product.
That includes our capabilities in managing big data, cloud compute, all of the purpose-built enterprise AI models, supervised learning, deep learning, unsupervised learning, NLP, reinforcement learning, all coming together with the generative AI and enterprise search capabilities. What does this mean? Let me double-click on that. There's really two things when we talk about our generative AI product suite. The first is generative AI when it comes to an attachment of this product with C3 AI applications. By this, we mean a next-generation human-computer interaction model.
Imagine a situation where a human can ask any question in a natural language interface of a C3 AI application, and they would be able to get the top result, get the sources, get the summaries, be able to chat with the system. That's what we mean by generative AI attached to our existing apps. We have 42 of them, and it can be attached to, for example, reliability, supply network risk, et cetera, to transform the human-computer interaction model. This is great for our customers because it drives additional value, it drives democratization of the applications. It gets a larger community of people to be able to use our applications and unlock value from them. It's also great for us because it drives additional consumption for C3 AI.
The second opportunity with generative AI is even larger: the standalone capability, as we call it, which is really an enterprise search capability. This is the ability to map any dataset in an organization, structured data, unstructured data, tabular data, sensor data, to this generative AI product, and be able to answer questions, be able to reason on it, be able to get the right information to the right person at the right time. This is really the bigger part of what our generative AI story is, and this is what I'm going to be talking about and focusing on in this presentation. Tom covered this; I want to just double-click on it. We have really been laser-focused on the application of generative AI to the enterprise in our product capability.
What does this mean? When you look at the standard implementation of what others are doing when it comes to applying generative AI to the enterprise, they're often using an LLM either directly or an LLM fine-tuned on top of the enterprise's datasets. That could be text, HTML, or code, all feeding directly into the LLM. You have a chat-like interface for humans to ask questions and interact with that language model. The problems here are numerous; Tom covered them in his talk, and I'll double-click on them quickly. First, you have stochasticity. You have random responses. If I ask the same question twice, or if two people ask the same question, they might very well get different answers.
That's a function of the nature of using an LLM to reason and answer questions directly on top of the data. Secondly, traceability. What is the evidence that supports a specific statement or assertion by the LLM, and how do I have transparency into that? That is absolutely not supported in this architecture. There are no enterprise access controls built into this. That is, if the CEO asks a question, and somebody on the front line or on a factory floor asks a question, it's very hard to parse out what a person is actually allowed to see and only show them the information that they're allowed to see. There's also the risk of information leakage.
These LLMs are susceptible to prompt engineering attacks, and this is actually quite problematic, and Honorio referred to this. Tom referred to the Samsung case, for example. This is seriously problematic for the enterprise. Lastly, you have this problem of hallucination. The LLM might very well make something up in a certain situation. In contrast to this, which is what a lot of companies are experimenting with and trying out and providers are providing in the market today, the way we approached generative AI was really building on a decade-plus of our experience in applying these technologies to the largest and most complex enterprises in the world. In our architecture, we wanted to address these problems from the ground up.
First, point number one is, you'll see on the green area, up top on the funnel on the right-hand side, we have tables, we have applications, sensor data, log files. We wanted to consider data sets that are much broader, beyond just the unstructured text-type data. We also have this separation between the knowledge model in blue and the LLM itself. What does that mean? That means we're actually using separate deep learning knowledge models to embed the enterprise's information and storing that efficiently in a vector store.
When a human asks a question through chat or through search, the language model in turn has to ask that question of the knowledge model, and between the knowledge model and the LLM, there is a barrier or a firewall, where the access controls are applied. The LLM only receives the information that the user is actually authorized to see. We've separated the memory module in blue from the reasoning capabilities of the LLM. The LLM is not answering a question directly. The LLM is relying on its memory. The memory resides in that blue box.
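The separation described above, a retrieval layer with an access-control firewall sitting between the vector store and the language model, can be sketched roughly like this. All names and data here are hypothetical, and toy stand-ins (a bag-of-words "embedding", a two-passage "vector store") replace the real deep-learning components; the point is only that authorization is enforced before any context reaches the LLM.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    text: str
    allowed_roles: set  # which roles may see this passage

# Toy stand-in for the vector store of embedded enterprise content.
VECTOR_STORE = [
    Passage("Q3 revenue forecast: confidential figures", {"ceo", "finance"}),
    Passage("Pump P-101 maintenance steps: close valve first", {"operator", "ceo"}),
]

def embed(text: str) -> set:
    """Toy stand-in for a deep-learning embedding: a bag of words."""
    return set(text.lower().split())

def retrieve(question: str, role: str, k: int = 1) -> list:
    """Rank passages by relevance, then apply the access-control firewall."""
    q = embed(question)
    scored = [(len(embed(p.text) & q), p) for p in VECTOR_STORE]
    relevant = [p for score, p in sorted(scored, key=lambda s: -s[0]) if score > 0]
    # The firewall: drop anything the caller is not authorized to see
    # BEFORE it is handed to the LLM as context.
    return [p.text for p in relevant if role in p.allowed_roles][:k]

def answer(question: str, role: str) -> str:
    context = retrieve(question, role)
    if not context:
        return "No authorized information found."
    # A real system would now prompt the LLM with `context` only.
    return f"Based on: {context[0]}"
```

So an operator asking about the revenue forecast gets nothing back, not because the LLM refuses, but because the relevant passage never reaches it.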
One more thing I will highlight here: you have the chat interface, you have a search interface, but you also have an orchestration interface. This is a key part of our product: we want to be able to orchestrate other applications. We want to be able to orchestrate purpose-built AI and ML models. We want to even be able to chain LLMs together and orchestrate other LLMs. By the way, in many cases, in that LLM bucket, you might have much smaller models, and actually many more of those fine-tuned, smaller models all working together, in a much more efficient way than some of the large LLM implementations today. The benefits of this are numerous. We have deterministic responses. You ask the same question, you get the same response. Full traceability, transparency.
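The orchestration idea above, routing each request to a small task-specific model and composing models into chains, can be illustrated with a minimal sketch. Everything here is hypothetical: the three "models" are plain functions standing in for fine-tuned LLMs and purpose-built ML models.

```python
def summarize(text: str) -> str:
    """Stand-in for a fine-tuned summarization model: first sentence only."""
    return text.split(".")[0] + "."

def qa(question: str) -> str:
    """Stand-in for a fine-tuned question-answering model."""
    return f"Answer to: {question}"

def anomaly_model(sensor_id: str) -> str:
    """Stand-in for a purpose-built sensor-anomaly ML model."""
    return f"No anomaly on {sensor_id}"

# The orchestrator maps a task type to the right model or tool.
ROUTES = {
    "summarize": summarize,
    "question": qa,
    "sensor": anomaly_model,
}

def orchestrate(task: str, payload: str) -> str:
    """Dispatch to the right model; chaining is just composition."""
    return ROUTES[task](payload)

# Chaining two models: answer a question, then summarize the answer.
chained = orchestrate("summarize", orchestrate("question", "Why did the pump trip"))
```

The design point is that no single large model does everything; the router decides which specialist handles each step.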
I'll show you this in some of the product demos. Full enterprise access controls, where we're able to leverage all of our experience in doing this. No LLM-caused leakage of proprietary information, 'cause the LLM doesn't know any proprietary information. And then lastly, no hallucination. We have the temperature of these models turned way down. That's in summary what we've done that's super unique, and I think this differentiates us very uniquely in the enterprise space, and we have this as a production GA product available to deploy today. This builds on a decade-plus of our experience with the C3 AI Platform. This slide is one that Tom walked you through earlier today. This represents all of the software engineering we've done over the last decade.
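The "temperature turned way down" point can be made concrete with a simplified sketch of decoding. This is a deliberate simplification (real temperature rescales the model's logits before sampling), and the vocabulary and scores are invented, but it shows why temperature zero means greedy, repeatable output while sampling does not.

```python
import random

# Hypothetical next-token scores from a model at some decoding step.
VOCAB_SCORES = {"restart": 0.7, "inspect": 0.2, "ignore": 0.1}

def next_token(temperature: float) -> str:
    if temperature == 0.0:
        # Greedy decoding: always pick the highest-probability token,
        # so identical inputs yield identical outputs.
        return max(VOCAB_SCORES, key=VOCAB_SCORES.get)
    # Sampled decoding: stochastic, so repeated calls can differ.
    tokens, weights = zip(*VOCAB_SCORES.items())
    return random.choices(tokens, weights=weights)[0]

# Same question twice at temperature 0: the same answer both times.
a, b = next_token(0.0), next_token(0.0)
```

With temperature above zero, two users asking the same question can get different responses; at zero, the decode path is fixed.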
All of our applications run on this platform, including the generative AI product. I want to highlight some of the capabilities in the platform that our generative AI product uses, capabilities that would take other software companies quite a lot of time to put together. For example, runtime management: all of the hardware profile and hardware management, GPU and CPU runtime management and orchestration. Our generative AI product takes advantage of everything in the platform there. Vector store: we've actually extended the platform stores to include a vector store as part of this architecture. A distributed file system, so the vector store can spill over to a distributed file system, to disk, if needed.
We have deep learning retrieval models and large language models, and on the right-hand side, even a visualization component library to actually render results in the right chart formats. All of that is part of our Generative AI offering. A summary of what we've done as a standalone Generative AI product is that it really serves as a knowledge Copilot. What I mean by that is: think about a capability where you have the C3 Generative AI product, which serves as a deep domain assistant with expertise that can orchestrate APIs and tools, and help users take action. I'll give you a few examples of what this means. First is in manufacturing, and I'll come back to this example later in this talk.
But imagine a situation, and this is a real example of a case we're doing, where you have a company with machine operators and technicians that are fairly new, and an aging, retiring workforce. These operators are operating machinery that's, you know, hundreds of millions of dollars in economic value. They struggle to do that because the operation procedures are very complex. There's a lot of information, thousands of pages of standard operating procedures, maintenance manuals, operational manuals, recipe cards that they have to consult, tens of thousands of sensors that they have to look up, and if something goes wrong, they really don't know what to do.
They end up calling an experienced tech, saying, "Hey, can you help me out?" This is not sustainable. With the C3 Generative AI product, this same machine operator or tech, this novice operator, can now talk directly to our C3 Generative AI product, and our product has already read the standard operating procedures, the maintenance manuals, the operational manuals, is aware of all of the sensor data, for example, in OSIsoft PI, is aware of all of the recipe cards, and may even be aware of purpose-built AI/ML models that have already been configured. This is really the future state, where the operator can ask any question and get an answer back. Here's another example. This is an ESG domain: sustainability. Companies have data contained in dashboards, in purpose-built applications like Watershed or Sphera or C3 AI.
They have spreadsheets, they have slide presentations, sustainability reports, news feeds, all sorts of investor data. This data is all fragmented throughout the organization. At the same time, the challenge that most organizations have is that they can't answer even the basic questions: How am I doing against my CO2 reduction plans? What are my best-performing ESG issues or topics? What are my worst-performing ESG issues or topics? What are my worst-performing plans, or how am I doing against my budgets? Here's an example where, with the C3 Generative AI product, an operator can ask any question of the system.
For example, "How am I, you know, doing against my CO2 reduction goals?" That could be a question that I could ask. I just wanted to walk you through an example set of visualizations of what this could look like. There's my question again. I've just asked it in a very simple Google-like interface. Our system has gone back. The LLM interprets the question. The question is then sent to the retrieval models. The right embeddings are retrieved based on my access controls, and if there's a right match found, our product will even visualize this in the right visualization. In this case, it's a wedge chart that shows you how I'm tracking to my goals. It also summarizes it in text.
Here it says, you know, what does my plan look like? What do my goals look like? I can actually go directly to a specific application page or a document, or I can further chat with the system. I also have supporting data: the relevant search results from sources across the enterprise, from C3 AI as well as from other sources, in this case Tableau or Microsoft, in a ranked list ordered by relevance to the end user. That's the second example.
We have filed a patent quite broadly on the capability that we've developed here, which includes the orchestration of all of the components that I talked about before. Summary: we have deterministic responses, full traceability, full enterprise access controls, no LLM-caused leakage of proprietary information, and most importantly, also no hallucination. What are a couple of examples? We've been doing work in multiple domains. One of them is in banking or in financial services. This is an example of a customer of ours that has very complex documents, financial services documents.
This is an example of a bond document that's, you know, 80, 100 pages long. It has introductions, it has appendices, it has tax treatment, it has Board of Directors sections, and it takes an operator five, six hours to go through a single document. They have hundreds of thousands of these documents, and thousands more arrive each month. In this kind of a use case, our system is super powerful. For example, I can ask it any question, see if it plays. You know, what is this bond, for example? I can ask the question. I can get an answer, a summary answer. In this case, this 84-page document is summarized directly for me as an end user, and I can further chat with it. You know, what is the CUSIP of this bond?
I can expand that chat window so that it takes the full page. It's gonna answer that. What is the maturity date of this bond? It's actually gone through all the tables and all the details of the document to give me that. How much is being borrowed? $5.4 million in this case. What is the interest rate? How is it paid, et cetera? You get the idea. Some of these are very detailed type questions that would take an operator quite a lot of time to find. In this case, it can go through not only the details of the document, but also all the tables, diagrams, et cetera, and extract the right information for me. Another example is documentation.
Here, we've applied C3 Generative AI on our own documentation as part of our Version 8 product, and we're making this available to all of our customers. Our documentation is fairly complex. There's formal documentation that's written by our documentation specialists. There's type-level documentation for each C3 object or entity that's part of our release, often written by developers. There's tutorials that we write, many of them with each release. There's in-depth guides. There's a full, rich developer community; think of it as our equivalent to Stack Overflow. It's called community.c3.ai. There's interactive videos and video courseware, 101 courses, 201 courses, et cetera, that are often updated as part of each individual release and that encapsulate all of the features we have in that release.
This is fairly complex because the platform is very rich with a lot of capabilities. This is a game changer for us internally, as well as our partners and our customers. I can ask it a question, for example: What HPO techniques can I use? Which is a fairly difficult question, and I'm able to get an answer directly. By HPO, I mean hyperparameter optimization. This is a question that a data scientist might seek to answer, and I can further chat with that, and I can even get code snippets. In this case, you're seeing an example of code snippets for grid search that I've asked it for. It's a really powerful capability. Product roadmap.
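The HPO example above mentions getting back code snippets for grid search. As a generic illustration of what such a snippet covers, here is a plain grid search over a hyperparameter grid. This is not C3 AI's actual API; the function names, the grid, and the toy scoring function are all invented for illustration.

```python
from itertools import product

def cv_score(learning_rate: float, max_depth: int) -> float:
    """Toy stand-in for a cross-validation score of a trained model:
    highest (zero) exactly at learning_rate=0.1, max_depth=4."""
    return -abs(learning_rate - 0.1) - abs(max_depth - 4) * 0.01

# Candidate values for each hyperparameter.
grid = {"learning_rate": [0.01, 0.1, 0.3], "max_depth": [2, 4, 8]}

def grid_search(grid: dict, score_fn) -> dict:
    """Exhaustively evaluate every combination; keep the best-scoring one."""
    names = list(grid)
    best_params, best_score = None, float("-inf")
    for combo in product(*grid.values()):
        params = dict(zip(names, combo))
        s = score_fn(**params)
        if s > best_score:
            best_params, best_score = params, s
    return best_params

best = grid_search(grid, cv_score)  # {'learning_rate': 0.1, 'max_depth': 4}
```

In practice a data scientist would plug in a real cross-validation routine as `score_fn`; the search loop itself stays the same.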
Switching gears to where we are and where we're going with this, we're on a quarterly release cadence with the Generative AI product. Our next release is in July, and I wanna highlight a few things. First, we're gonna be releasing three new fine-tuned LLMs for specific tasks. One is for question answering, so an LLM that basically is used to orchestrate the question-answering tasks as part of the chat or search capability. Second is for summarization, to summarize very long documents and data sets. Third is for comparison, to compare things to a baseline.
We're also excited to be releasing a true generative data visualization capability, so think about this as feeding a data set into an LLM that's fine-tuned to read a data set, interpret it, and then generate the right chart format on the fly. Improved administration flows, and then multi-LLM chaining and orchestration will be part of our July release. Highlights in October: we're gonna be releasing fine-tuned LLMs for manufacturing and oil and gas. A second big area of focus for us: we know how to go from on the order of six customers for generative AI to on the order of 60. We know how to do that very well, but we really wanna go from six to 60 to 6,000 customers with this product.
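The generative data visualization idea, look at a data set and decide which chart fits it, can be sketched with a simple rule-based stand-in. A fine-tuned LLM would make this choice from the data itself; the rules and column conventions below are entirely hypothetical.

```python
def choose_chart(columns: dict) -> str:
    """Pick a chart format from a data set's shape.

    `columns` maps column name -> list of values. Stand-in heuristics:
    a time-like column with a numeric column suggests a line chart,
    two numeric columns suggest a scatter plot, otherwise a bar chart.
    """
    traits = [
        ("date" in name.lower() or "time" in name.lower(),
         all(isinstance(v, (int, float)) for v in vals))
        for name, vals in columns.items()
    ]
    has_time = any(t for t, _ in traits)
    numeric_cols = sum(1 for _, n in traits if n)
    if has_time and numeric_cols >= 1:
        return "line"     # time series -> line chart
    if numeric_cols >= 2:
        return "scatter"  # two numeric columns -> scatter plot
    return "bar"          # categorical vs numeric -> bar chart
```

The fine-tuned model replaces these hard-coded rules with a learned mapping, but the interface, data set in, chart specification out, is the same.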
We think that this product can actually be self-service, where people can launch it from a cloud marketplace, from one of our cloud partner providers, and basically set up their users, set up their data sets, set up their configurations, and be off to the races themselves. We're super excited about self-service deployment. That will be a big theme of our October release: making it so that people can actually launch and set this up themselves, accelerating their time to value and, for us, the time to consumption.
We're also gonna be releasing, in the October release, a fine-tuned LLM for code generation, for the C3 AI type system and DSL, and this will also be game-changing for all of our partners, Booz Allen, for example, as well as for all of our customers that develop on top of C3 AI, 'cause it's gonna make it much easier for them to actually write their own applications or extend and modify our applications. Then the last thing I'll highlight in October is developer tools for third parties. This will make it easier for people to use our generative AI capabilities. In January, we're planning three more fine-tuned LLMs to be released.
One for defense, one for banking, and one for CPG retail. We're also planning specific capabilities for C3 AI applications. Think email and memo generation as part of C3 AI CRM. Think about report generation as part of C3 AI ESG. Think about the summarization of AI evidence packages from the smaller models we have in all of our applications, being able to summarize those in a human-readable interface. That will all be in the January release, as well as enhanced multilingual support for organizations that operate in multiple languages. In April, we're thinking through fine-tuned LLMs for data generation and fine-tuned LLMs for DevSecOps.
In terms of application support, in April, we're also thinking about additional capabilities for our supply chain suite, coaching and recommendations as part of C3 AI CRM, as well as our law enforcement application. Lastly, multimedia support: images, videos, audio, et cetera. I wanted to cover a few examples, or a little bit more detail, on our pilots. We had talked last quarter about three specific generative AI pilots. One with a large U.S. manufacturer; a second with a U.S. federal government agency, starting with technical papers and then going on to other technical classified data sets.
Then third, with a large U.S. refiner, this is looking at price and volume data, market data related to crude and commodity prices. Double-clicking on the first one: what does a generative AI pilot look like? Typically, for us, this looks like less than 12 weeks in duration. There might be a design and analysis phase, where we're looking at what data sets to really bring in and how to bring that in with the customer, working with them through that. There might be structured data, unstructured data, sensor data that we might want to map and bring in to our object models. All of that is part of that design and analysis phase.
We're configuring the application and then bootstrapping the training and improving the performance of the end-to-end system. That's what the rest of the time looks like. These typically go quite fast, typically three months in duration versus our standard pilots, which are up to six months. In this case, in the manufacturing case, we're bringing in standard operating procedures, which are very complex documents, maintenance manuals, recipe cards, and then data from OSIsoft PI sensors. There's tens of thousands of sensors per machine here that are mapped into our system, and then the generative AI user interface that I showed you up front. Let's double-click into that.
Here, I really wanna showcase the support for operators, for novice operators, to make them equivalent to experienced operators. I can ask, as an operator, any question. This example shows me asking a question that's fairly complex: the procedure for Annubar cleaning. Now, this is something that I would otherwise have to find in a specific procedure or maintenance manual, and it's very difficult for the operator to know how to do it. The AI summary immediately shows me the steps that I need to go through, and not only does it show me the steps, but it actually links to the sources.
Please join us in our demo station after, and we can also give you a live demo of a similar system. That's a synthesized AI response across all relevant data with the sources, and there's no hallucination in any of this. The second thing that you'll see here is a ranked list of relevant search results from other documents. You can see some of them are documents specific to vacuum systems. Some of them are bearing manuals, et cetera, that are in ranked order of relevance. I can also further chat with the system in this example. It's basically allowing me to ask follow-on questions, and it's context-aware.
Each question that the system is answering, it's actually aware of the original question that I asked. For example, I can ask, you know, what is the frequency of cleaning, or what are the tools needed? Each time it gives me a response and then also links me to the source documents. Interactive chat with context and follow-up. This is hopefully giving you a sense of the power of the system. There's an AI summary, all of the sources, a ranked list of relevant search results, and then an interactive chat with context.
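The context-aware follow-up behavior described above can be sketched in a few lines. This is not the actual product code; it only illustrates the mechanism by which a follow-up like "what tools are needed?" still refers back to the original Annubar question, because the accumulated conversation is carried into every answer.

```python
class Chat:
    """Minimal context-aware chat: each answer sees the whole history."""

    def __init__(self):
        self.history = []

    def ask(self, question: str) -> str:
        # A real system would feed the whole history to the LLM as
        # context so follow-ups resolve against earlier questions.
        context = " ".join(self.history + [question])
        self.history.append(question)
        return f"[answered using context: {context}]"

chat = Chat()
chat.ask("What is the procedure for Annubar cleaning?")
followup = chat.ask("What tools are needed?")
# `followup` carries the original Annubar question in its context.
```

Without the history, the second question would be unanswerable on its own; with it, the system knows "tools" means tools for Annubar cleaning.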
The benefit of this really is that this current process, which is incredibly inefficient and incredibly difficult for these novice operators operating these very expensive machines, is gonna completely change: these machine operators or techs can interact directly with our system, with the C3 Generative AI product, and access the right information at the right time. With that, I think my last slide is a summary of how we are unique. In the standard LLM approach that we see from many others in the market, you have all of the data directly going into an LLM. The LLM might be fine-tuned on those datasets. You have problems with random responses, no traceability, no access controls, risk of leakage, and it's prone to hallucination.
On the right-hand side, with our architecture, you have all data, not just unstructured data, everything embedded and stored in a vector store, and then an LLM that's capable of chat, search, as well as orchestration, deterministic responses, full traceability, full enterprise access controls, no LLM-caused leakage of proprietary information, and no hallucination. So with that, I'd like to conclude this section, and I'll hand over to my colleague, Binu Mathew. Binu is responsible for product and engineering at C3 AI, and formerly from GE and Baker Hughes.
Thank you, Nikhil. My name is Binu Mathew. I run products and engineering at C3 AI, and I came to C3 AI from Baker Hughes, where I ran digital products, and then prior to that, I spent 16 years at PeopleSoft and Oracle. A major reason for the Baker Hughes joint venture, and for me to come to C3 AI 4 years ago, is that C3 AI is unique in its focus on enterprise business value. We're the only company that does it in this way. This is in sharp contrast to what you see out there, even today.
A focus on tools, a focus on technology, and effectively, science experiments that do not scale, because customers are left to collect data, apply tools, and work on individual models, and while you might actually get some very interesting results, and there are very good tools out there, you don't get enterprise AI applications. You don't get applications that scale. You don't get applications that are attuned to business value. Our focus is very different. We start from the value chain. If you're looking at a large enterprise, you look at: What are the business processes involved? How do you go from customers to supply chain to manufacturing? How do the business processes work? How does AI apply to these use cases?
Once you have that, then you can think about how an AI application could be, should be built, what models you should focus on. You need that context, and you need the expertise, and you need the technology to do it in that way, or you do not have scale. Again, we're the only ones who do this. To do that properly, you need enterprise AI applications. You need applications that collect data at scale, that can be deployed easily, that have user interfaces and can actually be adopted. We have a very comprehensive suite of these. We've been doing this for well over a decade. We started in 2009. Our initial focus was in the energy management space, then, you know, utilities, financial protection. We expanded then into predictive maintenance.
We're really the masters at predictive maintenance now. We did that in many different cases. We expanded into financial services. Over the last few years, we've now built these up to 42 out-of-the-box, turnkey applications. You can deploy these applications, you will get value. We can do these in pilots. We will have a production application running for you in less than 6 months. Now, we've started aggregating all of these applications into suites. We now have enough applications where we're really taking a suite focus to the whole thing. If you take supply chain as an example, we have applications like inventory optimization, C3 AI Supply Network Risk, production schedule optimization, sourcing, demand forecasting, and all of these applications work together. We've deployed these at some of the largest customers in the world.
Part of where we're truly unique, besides our overall focus on enterprise business value, is that we are also the masters of data fusion, in our ability to bring data from many different sources. To be clear, we're not in there replacing your systems of record; there are good applications that do that, and we complement them, but we can aggregate your data from all of these sources. If your underlying sources are performant enough, we don't have to copy the data over; we can actually access it live from those systems, and we integrate that into one unified data image. What that means is that you can get a representation of your entire enterprise from one place. Think about this from a supply chain perspective. What that means is that I can look at a part that I've sourced from a supplier.
I can move that part through the sourcing process, I can move it into warehouses, I can track how it ships through logistics, I can get it to my manufacturing plant. I'm transforming that part into something else, into a product, and I'm tracking how that product then gets out to our customers. We can replay that all the way from the beginning to the end, and when you can do that, then you can also learn and you can predict. This is where the AI portion comes into play. You heard Nikhil talking about generative AI. We already have very powerful data fusion and predictive AI capabilities, and generative AI is really going to make a difference in terms of how customers interact with these applications. We have very, very good-looking UIs.
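The "replay a part from beginning to end" idea above comes down to fusing events from different source systems into one ordered history per part. A toy sketch, with an entirely hypothetical event schema and part IDs, might look like this:

```python
# Events about one part, fused from different source systems:
# (timestamp, source_system, part_id, stage)
EVENTS = [
    (3, "logistics", "P-42", "shipped to plant"),
    (1, "sourcing",  "P-42", "ordered from supplier"),
    (4, "mes",       "P-42", "consumed in product A-7"),
    (2, "warehouse", "P-42", "received at warehouse"),
    (5, "orders",    "A-7",  "delivered to customer"),
]

def replay(part_id: str, events=EVENTS) -> list:
    """Reconstruct the ordered history of a part across all systems."""
    return [stage for ts, _, pid, stage in sorted(events) if pid == part_id]
```

Once every system's events land in one unified image keyed by part, the full trajectory is just a sort and filter, and that ordered history is what the predictive models learn from.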
We spend a great deal of time on our UX process. With generative AI, you can actually start asking questions, for instance: "What is my on-time delivery performance? Which suppliers are late in deliveries?" That brings a whole level of interactivity and a whole level of adoption that we can drive with this. You can expand, including with the generative AI process, into much more elaborate what-if scenario planning. You can continue expanding the ability to scale and deploy these applications. This roadmap, by the way, is illustrative of all of our other applications. Our Reliability Suite is built up with all of our experience from predictive maintenance across various industries and various companies. This is an example, I think, of where all of our other applications will go.
At this point, our reliability application has been deployed at very large facilities at scale. That means you're talking about thousands of pieces of equipment, billions of rows of data a day, and trillions of rows of data being analyzed in aggregate. That means that we can get an enormous amount of information by learning patterns of how equipment really behaves. To contrast this with what traditional applications do: traditional applications, and again, there are some good ones out there, are based on rules, on how machines were designed, on physics, which again is very good from a design perspective. But if you truly want to understand whether a machine is going to fail or not, you need to learn how that machine, that specific machine, is behaving.
Once you have that, then you can get a great deal of insight from it. What that means for the amount of noise you get: if you're dealing with a large manufacturing facility, you're inundated with thousands of alerts from dozens of different systems. We can pull that all down into one place. We can give you actual insights in terms of what's likely to happen, and that means that we can increase the availability of your plant by a couple of basis points, and that ends up becoming a great deal of value. Again, this is deployed at some of the largest customers in the world. You'll see a pattern here. We can integrate data from multiple different data sources, some traditional, some non-traditional, and pull it all together.
We do the same thing with sustainability. Some very interesting things are happening in sustainability. Sustainability is an area, by the way, where C3 AI really got its start. We started working on energy management in 2009, energy and emissions management. With the current focus on climate technology, ESG, materiality, stakeholder management, and with our capabilities in generative AI, we can actually provide comprehensive analysis of the data sources that are relevant. This is not us coming up with random invented factors. We can actually analyze the data. We can look at what's relevant to external stakeholders, and we can give you updated materiality assessments. Our energy management solution is integrated with our other tools.
For example, since many of our customers have implemented our C3 AI Reliability solution, the same data elements, the same sensor data, works immediately for sustainability. If you have scope 3 emissions that are coming in from a supplier, that integrates directly into our energy management solution. We even take this much further. We're working on AI decarbonization planning, going after supply chain emissions, as I mentioned. We can give you far greater insight into that than you can today, and we're going to expand that with generative AI capabilities. ESG reporting is a major burden for customers today. We can help you generate those reports. Our CRM solution, again, you'll see the same sort of pattern here. A couple of things that I want to point out with CRM.
Most CRM solutions today rely on what a salesperson enters into the CRM system. What that means is the salesperson is going to enter just enough information to get their manager off their back. You actually want to see what's going on: what's going on with email interactions, what's happening in the broader market. If you can pull all of those data sources together into a unified image, you can learn from the behavior, and you have the ability to optimize and predict what your revenue is going to be with a much greater degree of precision. You can intervene earlier. Our ability to help drive true sales impact, true sales growth, is much higher. Defense and intelligence. You heard Graham talk about what we're doing in the federal space with Booz Allen.
Again, this is an area where we've been established for quite a long time. We have applications like Readiness. You heard Tom state that this is the system of record for the U.S. Air Force. We're active in the intelligence community, in terms of smart tools that we're providing to very senior personnel, and again, deployed at a wide number of agencies in the U.S. government. State and local: we have an application that is proving to be quite successful in law enforcement, being able to aggregate data from multiple systems and make it easily available to drive a criminal investigation, and we're able to do data-driven property appraisals at scale, at speed.
All of these applications, the reason we can do so many of these applications well is that we have domain expertise. We have a great data science team, as you heard from Nikhil. The biggest reason we can do this is because we have a platform where 95% of the code is the same, application over application over application. When we're coming up with a new application, it's really a declarative process. We need to know what the data elements are. We need a certain amount of domain expertise to model that. We can build models, but that's less than 5% of the code. The platform provides the rest. The platform allows us to deploy on AWS, on GCP, on Azure, on premise. Same code. The platform allows us to scale to trillions of rows of data.
The platform uses the same technology that we use for data fusion to integrate application platform services. Because we can integrate platform services in that way, it is essentially future-proofed; we're constantly moving the technological frontier forward with our platform. That platform has been given the top ratings by multiple analysts, the Forrester Wave, Constellation. Our product focus right now is on making our products as easy to adopt and as quick to deploy as possible. We have a very simple pricing model now. For our pilots, you pay half a million dollars. In less than six months, we will have that pilot deployed for you. We will have that pilot not just deployed, we will have it running at production grade.
That means that at the end of the pilot, you will have had something that has proven its value, you will have it live, and if you like it, you can just keep running. Again, doing that in the space of six months for an application of this class is not something you'll find anywhere else. Once you get into the production phase, I know that most of you are familiar with the consumption-based model. You pay for it by vCPUs. We continue to expand our talent base, because to build and drive all of these applications, you know, we have a very strong R&D team. One of the things that's unique about C3 is our culture. We have everybody in the office, and because of that, we have a very collaborative atmosphere.
We're able to drive, I think, a great deal of productivity by being together. We continue to go drive that through our platform technology and our application technology. In addition, we're building up new development centers. We have a development center that's coming up in Guadalajara, Mexico. It is very close to us from a time zone perspective. We have a great relationship with the government there, you'll see us increase our application capacity by a significant level as that team comes online. To conclude, we have a unique focus on enterprise business value. We have 42 out-of-the-box applications.
We have a platform that represents a billion-plus dollar investment, which allows us to scale, accelerate, and stay up-to-date in terms of technology, and we have a unique talent base. With that, thank you all.
There we go. Okay.
You did a great job. Very smart.
There are some people with microphones, and we'll be very pleased to field any questions that you may have. Alex, do you have a question? No? Come on. You got one. I know you got one.
They have been coming in online.
Got online? Okay, well, I know Alex has one, so that's good. Let's start with Alex.
Put it on.
No, you gotta fill this. You gotta push the button on that thing, I think.
Try it one more time.
It's a hardware problem. Always comes down to that.
Thanks, Tom. Like the old days. I have maybe a couple of questions, from general to more specific. There's a point of view out there that, many comments about AI.
Could you hold the microphone a little closer to your mouth, please? We're having trouble.
Many comments about AI by the investor community, as well as companies we follow. Some comments focus on productivity, meaning headcount reductions, and other comments are on productivity in terms of velocity of development. What do you think takes precedence over the next year: headcount reductions, or productivity in terms of development and how quickly applications can be put out? Second question: you talk about pipeline doubling and sales cycles decreasing by 20% or 30%. That tends to produce decent financial growth. What should we be looking at as investors to see this kind of productivity improvement in your own model as your pipeline doubles and sales cycles decrease?
Let's wind those backwards. The pipeline has just increased by 100% year-over-year since the end of the fourth quarter, in terms of qualified opportunity in the following 12 months. Our sales cycles have decreased from, I think, about 13 months to four months in the last few years. I think the shortest sales cycle we had was the one that Mitch mentioned that just closed yesterday, and that was four days.
That's getting in the ballpark. We clearly see dramatically increased interest in C3 AI, you know, aided by the consumption-based pricing model, which clearly, in the short term, would put downward pressure on revenue growth, but in the medium and long term is a revenue growth accelerant. We're pretty optimistic, Alex. As it relates to productivity, I have not been involved in any discussion where people were interested in productivity as a means to reduce headcount.
They're just interested in increased productivity: to get more, even more, product out of their factory, to be able to deliver, you know, food products on time, to the right place at the right time, so that people don't starve in North Africa. The US Air Force is looking to get, you know, 25% more effectiveness out of their 5,000 aircraft. They're not laying off pilots; they're just getting 25% more airplanes in the air on any given day. Next, other question.
Tom, just to follow up on Alex's-
Yeah
Point being that generative AI burns a lot of compute cycles, and last I heard, you switched to a consumption model, so I would presume that should be a very positive switch in pricing mechanism for you.
Ed, do you want to comment?
Yeah, I think, as I kind of alluded to, the first step is just to get a pilot in place. To take the example that I gave, which is reliability in one facility, you basically scale up across all facilities in an organization. Consumption goes up. As you heard Honorio talk about, there's another application that gets deployed; it might be production schedule optimization. You roll that across all the facilities, and there's more consumption. The answer is yes, but the consumption is commensurate with the business value that the customer is getting. If the customer is getting value, they'll use more of it, the consumption will go up, and we get paid.
Paul, you have questions online or Amit?
Yeah. Our first question here is: any comment on the federal business? We heard from Booz Allen, but how is C3 seeing the market evolve?
Ed?
There's significant interest in our products in the federal market. I would say that, not unlike other large organizations, there were tens or hundreds of individual projects going on. Models were being developed, but there was an inability to kinda scale them up across these very large organizations, whether in defense or in intelligence. There's recognition now, and you heard it from Graham and Booz Allen, that what you need is an AI platform to not only scale these models in one domain, but also reuse the capabilities in multiple domains. Take contested logistics: if you do it in one service, you can take that same capability, apply it across multiple services, and scale up very quickly.
I think that's really the essence of the demand in federal and defense intelligence and civilian over time.
You could imagine that; you know, we were recently selected as the program of record for predictive maintenance for the U.S. Air Force. As a comment, I believe this is the first AI system to be designated as a program of record in any portion of DoD. This is a real milestone, but, you know, these people are looking at trying to save billions of dollars a year, and those contracts are not really in place yet. There's some opportunity there going forward for growth.
There are more questions online, and keep going.
Go!
Another one here is: Do you expect to land mostly generative AI use cases because it is easier to get up and running?
Well, Ed, why don't you comment on the pilots for last quarter? I think actually, a minority of them were generative AI.
Yeah, that's right. I think we're seeing broad interest in all the applications. I'd say that's fair to say. Generative AI is a companion interface, if you will, for those applications. I'll go longer here, which is, I talked about reliability in a facility and basically scaling that up across all facilities. Now, that application, you have to train an end user on, and there's a specific maintenance operator that basically is using it. You can complement that with C3 Generative AI in the reliability realm, and you make the information now available to anybody in the plant, the plant manager, the head of manufacturing, the head of operations.
Everybody becomes a user of that information, and they can ask a question and get an answer without having to be trained on an application, without having to log into an application. We see generative as very complementary to the applications and just a broader net for AI.
This is an accelerant for all of our business.
Correct. That's right.
Another one here. You have clear leadership in the oil and gas market. How is C3 AI helping in other verticals like healthcare?
Who wants to field that? Who's up here? Is Kumar up here?
I'm here.
Kumar didn't make it. Oh, you're here.
He's here.
Life sciences.
Sure. You know, we have very strong leadership in oil and gas, but I would argue that if you look at some of the leading providers across manufacturing, financial services, defense, intelligence, and others, we're achieving leadership across those customer bases as well. In healthcare, we are starting in life sciences, where we're doing a lot of work applying the applications that we have, whether in reliability or supply chain, to that industry, and I think we'll be able to scale from there as well. Our goal will be similar to oil and gas, right? Get the leaders within that industry, show economic benefit, move to the next industry, and scale it, and I think that's what we're doing.
For whatever reason. Well, I know the reason, but it's clear that the utilities were the first-
Yeah.
The utility industry was the first industry to adopt enterprise AI. We can talk about that some other time if you want. The second industry that really was just kind of blown open to us was oil and gas, and that was primarily due to our partnership with-
Baker Hughes. You know, that's been a hugely successful partnership, around which we are closing a lot of business around the world. I think in the long run, you know, enterprise AI is clearly going to be adopted by all industry segments: federal government, defense, intel, state and local government, chemicals, pharmaceuticals, consumer packaged goods, travel, transportation, pharma, you name it. Okay, so what the industry looks like in five years, the industry opportunity, looks about like the distribution of GDP across those market segments, 'cause there is no segment that is not significantly altered and facilitated by the application of AI.
Yeah, thank you. Thank you for your presentations. Just wondering how you would evaluate your competition? Not a lot was mentioned about that; there doesn't seem to be a whole lot. I would think that a lot of the SAPs and Oracles, and even the Accentures of the world, would have this on the front of their dashboard, and I'm not sure why they're not already delivering and performing products and services like you do. What do they need to do to catch up? Thanks.
Well, pretty good question, Graham. If SAP has a competitive product to us in any segment in which we operate, we're unaware of it, okay? As for Oracle, we're pretty familiar with what they do. Okay, if Oracle has a competitive product to us in any segment in which we operate, we're unaware of it, okay? We understand that they have a plan for a generative AI product with some third-party partner, so if and when that happens, they might be a competitor in generative AI. What was the third company, Accenture?
Accenture.
Accenture is a professional services firm. The way they make money, honestly, is to make these projects as complex as possible, okay, and as long and as hard with as many hours as possible. Now, we have these.
Not Booz Allen.
Light organizations-
Not Booz Allen.
Like Booz Allen, who want to deliver solutions fast to their customers. This would be the antithesis of Accenture. That's why these guys are kicking it in the defense and intelligence community. I didn't take any shots at Booz Allen; I'm not going to do that, Graham. These guys are the best. But to our knowledge, I mean, when Ed and I, you know, invented the CRM market sometime in the last century, look at what it took SAP and Oracle. Oracle had 1,000 people on it for 10 years. 1,000 people, okay, building a CRM product, and 10 years later, they had less than 1% market share. Same thing for SAP.
And CRM, guys, is four orders of magnitude simpler than this. I mean, it's trivial. This is nontrivial. How a decade later you could have less than 1% market share is baffling to me. You know, sometimes these big companies kind of have this innovator's dilemma problem, where they kind of miss the next big thing.
Tom, what about Palantir in the defense area?
Why don't I have somebody handle that professionally? Ed? Now, where do I start? I think Palantir, again, does a fraction of what we do, and Tom showed the tech stack. They do the data fusion piece of the equation. They have much more of a services model, so typically, you know, the pilot projects that we deploy are three full-time equivalents, and we get an application up and running in production with users in a period of three to six months. There is no other solution out there that gets that job done. In a case like BP, Palantir will send 100 people in for 10 years.
Correct.
Okay?
Yeah.
That's how they get paid.
Mm
100 people for 10 years. The other difference between our business models is that when we're done with a deployment, the customer doesn't find that the software company owns their data. Okay, that kind of causes some friction in the market. With our customers, the customer owns the data, and the customer owns all the derivative works. So Palantir's business model is a little bit different. They look to me to have pretty good data fusion capability, they clearly have a lot of professional services capability, and they're able to monetize the fact that they own the customer's data, which the customer finds out a few years later, to its chagrin. Yeah, I did that pretty well, right?
Yeah, you did. That was very professional.
I didn't come out of my shoes. Amit?
Thanks.
Is there another question in the room? Right in the center row.
Right in the middle, all the way in the back.
Hi.
Hi, two questions. I wanted to kind of reposition the competitive question and think about it differently in the sense of in a world where.
A little closer to your mouth, please. Thanks.
I wanted to reposition the competitive question. In a world where we are starved of GPUs, and if Facebook can't get GPUs, I would imagine enterprises are struggling as well, and will for some time. How do you guys think about your competitive positioning versus NVIDIA themselves and their DGX offering? I imagine many enterprises would contemplate going directly to them and renting their compute power in order to execute against their LLM ambitions.
I'm really having trouble hearing.
Yeah
Was it a competitive position versus-
NVIDIA.
Oh, NVIDIA is a partner. I mean, Jensen is a partner. Go ahead.
I mean, it's a different level of the stack. We're focused on the applications, and then we take advantage of NVIDIA technology. I think that was the essence of your question.
Well, no, with DGX, NVIDIA actually has their own base LLM libraries. Like, you literally rent them-
Oh
And use their libraries.
We'll use it.
I'm just trying to understand how you fit within that.
We're agnostic of the actual libraries. If a customer is using NVIDIA's LLMs, we'll take advantage of those. If the customer is using HPE's recent announcement with their LLMs, we'll take advantage of those. Bard, FLAN-T5, or any LLMs that come out basically are pluggable into our model architecture. These are all partners rather than competitors to us.
The second question I had was, as we move from more of a training orientation to this phase of AI to more inference, how do you guys foresee your position, as we make that handoff in the future?
Well, we've been doing inference for a decade now. All of the work that we've done in deploying AI applications involves both training and inference. We've been doing it for a full decade. I don't know if there's more to that question.
Yeah. Is it more additive or not?
Yeah, I mean, whether it's training or inference, it's just CPU cycles to us, okay? Whether they're training the model or running the model, it's a CPU cycle, and we get paid for the CPU cycle.
The computational power for training is significantly larger than it is for inference.
Yeah.
That's a fact.
Yeah. If I get in a-
Yeah, please.
We've always had a blend of training and inference in our consumption model, as well as in our base business case. We charge $0.55 a CPU hour or $0.55 a GPU hour, depending on the workloads. All of our applications require both training and inference on an ongoing basis for the smaller models, and we expect this to continue as we scale the use of large and small language models in the future. I don't see really any change in the fundamental architecture of our compute profile, whether on CPU or on GPU; we just see significant growth.
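The flat-rate consumption pricing described above lends itself to a simple back-of-the-envelope model. The sketch below is illustrative only, not C3 AI's billing code: the workload numbers are hypothetical, and the only figure taken from the discussion is the $0.55 hourly rate applied identically to training and inference hours.

```python
# Illustrative sketch of flat-rate consumption billing: every metered
# hour, CPU or GPU, training or inference, is charged at the same rate.
RATE_PER_HOUR = 0.55  # dollars per CPU hour or GPU hour, as stated above

def monthly_consumption_cost(cpu_hours: float, gpu_hours: float) -> float:
    """Total monthly charge across CPU and GPU consumption."""
    return (cpu_hours + gpu_hours) * RATE_PER_HOUR

# Hypothetical workload: 10,000 CPU hours of inference plus 2,000 GPU
# hours of periodic retraining in one month.
cost = monthly_consumption_cost(10_000, 2_000)
print(f"${cost:,.2f}")  # 12,000 hours at $0.55 comes to $6,600.00
```

Under a model like this, heavier generative AI usage shows up directly as more metered hours, which is the sense in which consumption tracks business value.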
Thank you.
Thank you, C3 management team, for doing this presentation. It was really helpful. My question, again, goes back to the competitive nature. Merel talked about alliances, right? It's grown, you know, hypergrowth. I was wondering, from a hyperscaler perspective, about the large ones, like Microsoft: they have a partnership with OpenAI, right? Can you comment on that? You guys are friends at some points when you go to market, but at the same time, they could build their own AI models as well. How do you balance that?
They'll use their AI models in our architecture, and they do. An AI model is not competitive with what we do. An AI model is complementary with what we do.
How about OpenAI?
I'm sorry?
How about OpenAI with Microsoft?
OpenAI, I mean-
Is also.
Is complementary. In other words, whatever Microsoft's LLM is, we can use it. Nikhil Krishnan, you take it.
You know, we can use the PaLM LLMs. We have our own fine-tune models. We can use OpenAI's LLMs. Keep in mind that the LLM is not the application. The LLM is one part.
Yes
Of an application. Even for our generative AI product, the LLM is just a small part of that application. There's so much additional capability, so much other technical work in that application: the vector store, the knowledge embeddings, the ACLs, the encryption, the interaction with database systems and sensor systems. All of that is part of our product, even in generative AI. We're LLM agnostic. We think there's gonna be a proliferation of LLMs, and we try to track it very closely. There's a Cambrian explosion of LLMs that will occur in the next year.
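The "LLM agnostic" point above, where retrieval, access control, and orchestration live in the application and the model is a swappable component behind a narrow interface, can be sketched roughly as follows. All names here (LLM, StubLLM, GenerativeApp) are illustrative assumptions, not C3 AI's actual classes or APIs.

```python
# Minimal sketch of an LLM-agnostic application layer: any model that
# satisfies the LLM protocol can be plugged in, while retrieval and
# access control stay in the application, outside the model.
from typing import Protocol

class LLM(Protocol):
    def complete(self, prompt: str) -> str: ...

class StubLLM:
    """Stand-in for any provider's model (OpenAI, PaLM, a fine-tuned model)."""
    def complete(self, prompt: str) -> str:
        return f"summary of: {prompt}"

class GenerativeApp:
    """Everything except the LLM is owned by the application."""
    def __init__(self, llm: LLM, documents: dict[str, str], acl: set[str]):
        self.llm = llm
        self.documents = documents
        self.acl = acl  # document ids the current user may read

    def answer(self, question: str) -> str:
        # Retrieval and ACL enforcement happen before the model is called:
        # the LLM only ever sees context the user is permitted to read.
        visible = [text for doc_id, text in self.documents.items()
                   if doc_id in self.acl]
        context = " | ".join(visible)
        return self.llm.complete(f"{question} given {context}")

app = GenerativeApp(StubLLM(),
                    documents={"d1": "pump 7 vibration rising", "d2": "payroll"},
                    acl={"d1"})
print(app.answer("What needs maintenance?"))
```

Swapping providers then means replacing StubLLM with another class that implements `complete`, without touching the retrieval or permission logic.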
Can I just follow up quickly? How do you make sure, when you go to market with Microsoft, that instead of the client using OpenAI's stuff directly, you're able to steer them to C3 AI's?
The same point, right? If a client wants to use OpenAI through, let's say, Microsoft, we can plug that into our generative AI product. It does not make sense, in an enterprise context, to use the LLM directly on their data. If they do that, they incur great risk: hallucination, stochasticity or random responses, risk of LLM exfiltration, proprietary information exfiltration, all of the things that I mentioned in the talk. We can take advantage of an LLM, like OpenAI's GPT-3.5 or 4, as part of our stack if a customer wants to do that.
We are completely agnostic. Whatever they wanna do is fine. Look at this from the hyperscalers' perspective: we're an application running on that cloud infrastructure. When we're running an application doing, for example, all predictive maintenance for the U.S. Air Force, that's a big application. Or the Missile Defense Agency, or Shell, which would be larger than the Department of the Air Force. We're using massive amounts of GPU hours, CPU hours, and storage. Those hyperscalers want that workload on their platform. That's why they want to partner with us to go close that deal.
'Cause you have the industry knowledge.
You know, we're raising the temperature of the planet, okay, literally. With the amount of capacity that we're utilizing for these applications.
We have applications that are rapidly deployable.
Mm-hmm.
That consumes cloud very quickly.
They want it. Microsoft wants it on Azure, Google wants it on GCP, and AWS wants it too. HPE wants it on their cloud now.
That's why they want to partner with us. They want that workload on their cloud.
Yep.
Graham.
Sorry, Graham Tanaka, Tanaka Capital.
Yeah.
Sorry, we're gonna give Juho a little action here. Just wondering if you could share with us your longer-term target profitability model. We've talked a lot about the AI model, but on your financial model, it appears you're hoping for breakeven by the end of next year, I believe. Is that around $80 million of revenue per quarter, it looks like? Maybe you can share that, plus your target gross margin percentage, operating margin, et cetera, and at what revenue levels. Thank you.
All right. Thanks, Graham. We shared in our last earnings call. Is the mic on? Just confirming.
Yeah.
Okay, perfect. In our last earnings call, we shared the plan to profitability: Q4 of this fiscal year, FY 2024; we're now in Q1. We plan to be operating profitable and cash flow profitable on a consistent basis from there on out, with the goal of generating a 20% operating margin in the medium to long run.
I can pick a question online here. There's a question: from an FTE perspective, is there any limitation on how fast you can scale pilots? In other words, do you have enough FTEs to support continued pilot expansion from here?
Who wants to comment? Ed.
I can take that.
Or Alex.
Sure.
Alex, that's your field.
Thank you. Can everyone hear me?
Yeah.
Yep, great. As we moved into this pilot, followed by consumption model, we've been hiring, you know, as needed, growing the number of FTEs we need to manage the number of concurrent pilots at any given point in time, plus the ongoing support services that we provide to customers based on what comes next after that pilot, right? Ed talked about scaling that first application out across an organization, scaling to adjacent use cases, using additional C3 applications, right? We're hiring only as fast as we need to support that number of concurrent pilots, which continues to grow over time.
For the forecast that we have given for this year, we have sufficient headcount to support the pilots and to meet the revenue growth that Juho has talked about.
That's right.
To be non-GAAP profitable and to be cash positive in the fourth quarter of this fiscal year and beyond.
Talking about.
Time for more?
Yeah, partners.
Oh, I'm sorry.
Yeah, let me hit one point on partners too.
Okay, please.
Simultaneously with hiring our own FTEs, of course, we are also enabling our partners to staff pilots and projects that we're doing with customers. We have a formal training program for our partners in place, and many of them are going through it already; I think Graham alluded to it. I believe Booz Allen currently has 50 people trained. You know, that is not direct subcontracting to C3, but there are ways we can very quickly staff up with partners like Booz Allen on many, many of our projects.
Thank you. There was a question in here?
Yes, there was a question.
Yes.
Hi, thank you for the presentation. Would you share some color on international expansion? I assume that AI is spreading out like wildfire.
The international expansion?
Yes.
Yes. From day one, we've been building a multinational organization. We have offices in Rome, Paris, London-
Bangalore, Singapore.
Go ahead. Where else you got?
Singapore, Bangalore, you know.
Munich.
Munich.
Munich.
Yeah, Amsterdam.
You can expect that in a steady, Sydney?
Sydney.
At a steady state, we would expect to see 20% of our business in Asia, 30% to 40% of our business in EMEA, okay, 40% of our business in North America, and likely 10% to 20% in the defense and intelligence community. I know that adds up to about 110%, but you get the idea. Maybe we'll get lucky and it'll be 110% instead of 100%. Please.
There's a question: Could you discuss more about the self-service generative AI product coming in October? How does self-service look like?
Nikhil
The concept there is that, from our partner cloud marketplaces, think Google Cloud, Amazon, Azure, a customer or a prospect should be able to find the C3 Generative AI product and then actually launch it and configure it themselves. Basically, map it to their data, set up their users, set up their permissions, set up their structures, and be off to the races. Be able to unlock value seamlessly by themselves.
Okay. What do you consider a healthy level of investing in sales and marketing? How do you balance that with growth?
Juho, you wanna field that?
Yeah. Again, in the long run, when we think about the long-term expense portfolio, we would expect to invest about 10% of revenue into marketing and another 18% of revenue into sales. Obviously, during this time when we're going after the market aggressively, we are investing a little bit more heavily in the sales teams in the shorter term, and in marketing, of course.
What are your most popular applications today? How do you see that evolving?
Our largest footprint today is in reliability. In reliability, I think it's almost non-competitive out there; I think we win a very high percentage of the competitive situations there. Our second largest application today is in supply chain optimization, Supply Network Risk. As for the third largest segment, that doesn't have to do with the application; it's hard to tell, actually, what our largest market segment is today. I think in the fourth quarter, our largest segment was defense and intelligence, if I'm not mistaken. You know, before this is over, AI is going to be involved in every business process.
I mean, this is not a small phenomenon. This will be involved in customer relations, customer service, all sorts of demand forecasting, demand chain, supply chain, asset optimization, cash management, customer churn. There's no aspect of business and government operations that is not touched by AI as this phenomenon explodes in the next decade. If you look at the positioning of C3 in this space, if I'm not mistaken, we're roughly a third-of-a-billion-dollar business today. At some point in time, we'll be larger than that. You know, this market is really big, okay? Let's look at the worst case for C3.
The worst case is that we're a relatively large and rapidly growing, cash-positive and profitable business in a huge and rapidly expanding market. How bad is that? That's not too bad. How many companies will be successful in that market? I think quite a few. What's the best case? The best case is that in one or more, or in many, of those segments, we turn out to be the market leader, in which case that best case would be pretty good. I think the difference between realizing the worst-case scenario or the best-case scenario is not gonna be due to the competitive dynamics of the market. It's gonna be due to the execution of this team.
It's basically us, and our colleagues around the world, that will gate the extent to which we seize the opportunity.
Hey, Tom, a follow-up that came in as a derivative of that. As it relates to reliability, is there any material difference between verticals in that respect? I think it's kind of inferred from the question: Is reliability applicable to all verticals, or is it more focused in some than others?
It's a great question, and it's the same application. See, it's actually the same, whether we're doing it for the human heart, okay, or whether we're doing it for an F-35 Joint Strike Fighter, or whether we're doing it for a paper machine at Georgia-Pacific, or whether we're doing it for an offshore oil rig at Shell. We do all of those. No, we don't do the human heart today.
Yeah.
Every one of those use cases, okay, we have in production deployment today, and it is absolutely the same application. All that varies from application to application are the data sources. Okay, the machine learning model will change: maybe a different machine learning model, or certainly one trained on a different set of data. And the user interface expression will change somewhat from industry to industry, but it is the same application.
... across all segments.
I-
Binu, go ahead.
I can just add to that. As Tom said, the only things that change are the data sources. We have what we call asset templates for different classes of equipment, which drive machine learning pipelines, and this is how we can train models at scale. Those are really the only things that change. You might have a UI change or so for a specific type of equipment.
We have a user interface for oil and gas, we have a user interface for manufacturing, we have a user interface for chemical, and we have a user interface for aerospace that's used by the Air Force and the rest of it. It's the same application.
Thank you for the presentations. Just as generative AI takes off, how do you think about gross margins? Because on the inferencing side or API fees to call on LLMs, are you gonna charge a premium to maintain gross margins? How does that outlook change over time?
Well, it's a very good question. Thank you for asking it, because what's important to understand is that this is being run in the customer's cloud, so it has no impact. The fact that they might be using greater CPU or GPU capacity doesn't affect our margins at all, okay? It's a shorter sales cycle and a shorter implementation cycle, so it actually should increase gross margin, okay? As that becomes an increasing part of our product mix, it should increase gross margin. Thank you for asking that question, 'cause it might have led to a misunderstanding out there.
Maybe one or two more, Tom?
Please.
In no particular order: how do you think about the co-opetition with Google and their ability to do enterprise AI apps with Vertex AI?
Merel?
The way we see it is that Vertex AI truly is complementary with the C3 AI applications. What we do in many of our products with any of the Google services, including Vertex AI, is we mostly leverage them as data sources. Anything you've done in Vertex AI, you can leverage within the C3 application. Now, of course, this sometimes leads to a little bit of confusion, but I think as you know, as we've worked on this partnership, we've really cleared up a lot of that confusion, and there's a lot of enthusiasm within the Google sales force to sell these C3 applications because they see that it generates Vertex AI consumption.
Another one here. Can you explain how you compensate for and resolve the hallucination problem with AI models?
Nikhil?
Yeah. The summary is that it's related to the architecture. We're not asking the LLM to answer questions directly. We have LLMs interacting with retrieval models. The retrieval models provide the relevant embeddings to the LLM, and then the LLM summarizes that in our generative AI stack, and we have the temperature of the LLM turned way down. The combination of these things and our fine-tuning basically encourages the LLM, if it doesn't have enough information, to just say, "I do not know." We really don't want the LLM to make up anything. We want the LLM to reason on available information, and to summarize and answer based on available information.
If not enough information is provided to the LLM, the LLM will reason and answer, "I don't know." This is how we avoid the hallucination problem.
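The retrieval-first pattern described above, answer only from retrieved context and refuse when nothing relevant is found, can be illustrated with a toy sketch. The word-overlap scorer and the 0.3 threshold below are stand-ins for real embedding similarity and tuned thresholds; none of this is C3 AI's actual implementation.

```python
# Toy retrieval-first pipeline: the answer is always grounded in a
# retrieved passage, and when relevance falls below a threshold the
# pipeline says "I do not know" instead of letting a model guess.

def relevance(question: str, passage: str) -> float:
    """Crude word-overlap score standing in for embedding similarity."""
    q, p = set(question.lower().split()), set(passage.lower().split())
    return len(q & p) / max(len(q), 1)

def answer(question: str, passages: list[str], threshold: float = 0.3) -> str:
    best = max(passages, key=lambda p: relevance(question, p))
    if relevance(question, best) < threshold:
        return "I do not know."  # refuse rather than hallucinate
    return f"Based on available information: {best}"

docs = ["turbine 4 bearing temperature exceeded limits on Tuesday",
        "quarterly shipment volumes rose in the EMEA region"]

print(answer("what happened to turbine 4", docs))
print(answer("who won the 1962 world cup", docs))  # prints "I do not know."
```

A production system would also pin the generation model's temperature near zero, but the structural point is the same: the refusal comes from the pipeline, not from trusting the model.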
Tom, can you share with us some thoughts on what you think about the rest of the software industry and how they'll leverage generative AI? They don't have the AI models underneath to fully inform responses to questions, but I would presume everyone will begin to use this to improve customer support or perhaps change their own user interface. Thoughts on the industry broadly?
I think it varies, from all this kind of yap out there, a lot of which is absolute bunk, and I won't comment on any names, okay, to the other extreme, which is highly credible, which would be Microsoft. I mean, these guys at Microsoft, this guy Satya, is clearly a genius, okay? You can see how they will leverage these LLMs in virtually all of their products, okay? In VS Code, to help people write code; in Microsoft Word, to help us all write documents; in email, to help us respond and craft documents; in whatever their search engine is, okay? What is their search engine?
Bing.
Bing.
Bing? Okay. I mean, those guys are gonna leverage it in every product they have.
Yeah.
You see, that's one where they're gonna be able to use these LLMs incredibly powerfully right now, okay, to make their products even more competitive than they already are, which is pretty darn competitive as far as I can see.
They've opened it up to an ecosystem of developers.
What those guys will do quickly is really impressive.
Yeah.
You know, a lot of the rest of the claptrap out there is just all a bunch of yap. I mean, there's nobody who hasn't announced something-dash-GPT in the last month, right? Microsoft, I mean, they're clearly gonna do it. Okay, I think, ladies and gentlemen, thank you so much for the courtesy of spending time with us today. We appreciate the opportunity to share our thoughts with you and give you an update and our perspective on the business. It is today, June 22nd, 2023.
I think there's some probability that by the time we get to June 22nd, say, 2028, you'll be looking at one of the world's great software companies. We thank you for your interest, and we look forward to keeping you posted. Thank you all very much. Now, by the way, for those of you who are interested, I think we have four product demos going on, and they have wine and cocktails or Coke or 7Up or whatever you want out there. Please join us for a refreshment. We'll be happy to show you C3 Generative AI or whatever you wanna look at.