All right, we will go ahead and kick it off. Thank you for joining us at the CrowdStrike session this afternoon at the Goldman Sachs Communications and Technology Conference. I'm Gabriela Borges. I cover security here at Goldman. Delighted to have on stage with me, George Kurtz, CEO and co-founder of CrowdStrike. Thank you for joining us.
Great to be here. Thank you.
George, I wanted to start a little bit with some of the conversations you had on July 19th, and specifically, what was some of the best advice you got in the hours and days following July 19th? And who were some of the mentors and advisors that you picked up the phone and called that were most valuable to you?
July 19th hit, and there's not a lot of time for advice. A big part of it was just trying to do the right thing, get in front of it, and kind of let people know what was going on. It was obviously a very fluid situation. And there weren't a lot of people to call. It was more rely on instinct: be open, be transparent, take accountability, and be able to communicate, you know, what it was and wasn't, 'cause people were wondering what was happening. So we needed to just go out and do that, and we probably broke every communication rule that was out there. But I think ultimately it paid off, based upon the customer response and how they viewed what we tried to do.
I think after that, you know, I got a lot of calls from a lot of different folks around the industry. Andy Jassy called, and Marc Benioff, and Bill McDermott. I mean, the list goes on and on, and a lot of it was, "You know, how can we help?" And I think that was super impactful and informative. I think people understood what we were going through and just trying to get customers up and running. But there was a tremendous amount of outreach from people that I've worked with in the past, that I respect, and a lot of it was around, "How can we help?" You know, there are a lot of companies that are bigger than we are.
They've been around longer, they've seen a lot of things, and they literally just wanted to extend a helping hand.
What's the answer to that question? How can they help?
I think a lot of it was, you know, just any advice that they may have had. Some of them offered, like, "Hey, we've got some of the best engineers, distinguished engineers. We're happy to sit down with your folks and, you know, just walk through what someone else does." It was everything from, "We'll send some folks down to help," to, "Call me if you need anything."
One of the comments that you had made at the time of the earnings call was the incredible number and volume and intensity of customer conversations that you've had in the last four or five weeks. Walk us through some of those customer conversations. How would you describe the level of engagement, and where does it go from here?
Well, the level of engagement has been off the charts, as you might imagine. When I went through the IPO process, I had this 100 by 100, which was meet 100 customers and prospects in 100 days. I met 132, and I think after the incident, it was 100 by 2, which, I mean, I had, like, 102 days. So we went through that, and if you think about the level of engagement and, you know, the seniority of the people that we talked to, we compressed more into a couple of weeks than the whole year. So you gotta look at the level of engagement.
You know, look, you have tough conversations, but a lot of the conversations as we've gone through it... I'll take you through one, and a lot of it was representative: "How are you, George? How's the company? What happened? Why did it happen? How are you gonna make sure it doesn't happen again?" And then, "We love your product. You gotta get through this. Stuff happens. You guys have been open, transparent." And as more information came out and we put the root cause analysis out, it became really helpful for customers, and I think that's been the overarching theme that we've heard back based upon how we handled it. We didn't blame anyone.
We took accountability, and what I know is what you know: we put it all out on the web for people to look at. The whole goal is to work collaboratively with our customers, show them what we've done and the changes we've already made, and then make sure that we can move forward and be the best product that we were on the eighteenth and that we were on the twentieth, right? That's why people buy us: because of what we've been able to do for them.
There was a little bit of confusion around the timing of the root cause analysis, and specifically, the outage update being perceived as a code update to a kernel, when actually it was more of a telemetry or configuration update. Explain the nuance for us a little bit. What's the difference?
Yeah, there is a nuance to it, and there's a lot of misinformation, a lot of it from competitors. But it was a configuration update. When you look at the software that was in use, we actually deployed the software in February, and it went through a full QA process and was dogfooded internally. It was deployed out through concentric rings, and then it gets picked up in customer environments as an N release, or an N-1 or N-2. We've been doing that for the last decade in a very robust fashion. And then, when you look at what happened after that: we have the ability to send configuration changes to the agent, which basically tell it to send different levels of telemetry back to our cloud.
Now, you certainly can implement something like an indicator of attack, but in this particular case, it was just reconfiguring some telemetry, and that's the power of the crowd in CrowdStrike, right? We get this rich telemetry back, we use it in our algorithms, and they keep getting smarter and smarter. So in this particular case, we had a configuration change, which means there's no code; it's just a config that the sensor consumes. We went through a validation process, and we validated all those fields. They actually worked. The problem is we had twenty-one of them, and the sensor understood twenty, and that's the simple explanation of what happened. So what have we changed in terms of the process?
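The mismatch he describes, twenty-one fields shipped while the sensor understood twenty, can be illustrated with a toy sketch. This is emphatically not CrowdStrike's actual code; it only shows how per-field validation can pass while a fixed field-count assumption in deployed code still fails:

```python
# Toy illustration of a cloud validator and a fielded sensor disagreeing
# on a template's field count. Each field validates fine individually,
# but the sensor was built against a 20-field layout and 21 arrive.

SENSOR_FIELD_SLOTS = 20  # what the deployed sensor was built to hold

def cloud_validate(template):
    """Cloud-side check: validates each field's content, never the count."""
    return all(isinstance(f, str) and f for f in template)

def sensor_consume(template):
    """Sensor-side parse: copies fields into a fixed-size layout."""
    slots = [None] * SENSOR_FIELD_SLOTS
    for i, field in enumerate(template):
        # i == 20 overruns the layout; in Python that's an IndexError,
        # in a kernel driver it's an out-of-bounds access and a crash.
        slots[i] = field
    return slots

template = [f"field_{i}" for i in range(21)]  # 21 fields shipped
assert cloud_validate(template)               # validation still passes
```

Calling `sensor_consume(template)` then fails at index 20, the analog of the field-count mismatch described above.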
We now run the configuration changes through not only the validation but all the QA and GA processes we have for code, and then deploy them in a phased-rollout manner, as well as giving customers the choice on how they want to deploy that content. You know, when you look at what happened, it's like Swiss cheese: a lot of things had to happen to get the hole right through there, and it did. But lessons learned. I mean, it worked thousands of times over the last 10 years until we had this issue. We learn from it, we move on, and our goal is to be the most transparent and most resilient in this area.
Not only are customers asking us about it, but they're asking all of our competitors and everyone else in the industry. We think we have a real advantage to be the best in this area.
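The concentric-ring deployment and phased content rollout described above can be sketched as a loop with a health gate between rings. Ring names and the health check here are invented for illustration, not CrowdStrike's actual pipeline:

```python
# Sketch of a ring-based (phased) rollout: deploy to one ring at a time,
# and halt before the next ring if health telemetry looks bad, so a bad
# update's blast radius is limited to the ring that caught it.

def phased_rollout(rings, deploy, health_check):
    """Deploy ring by ring; return (completed rings, ring that failed or None)."""
    completed = []
    for ring in rings:
        deploy(ring)
        if not health_check(ring):
            return completed, ring  # stop the rollout here
        completed.append(ring)
    return completed, None

rings = ["internal-dogfood", "early-adopters", "general-n", "general-n-1"]
deployed = set()
healthy = {"internal-dogfood", "early-adopters"}  # simulate a bad third ring

done, failed_at = phased_rollout(
    rings,
    deploy=lambda r: deployed.add(r),
    health_check=lambda r: r in healthy,
)
```

With the simulated failure, the rollout stops at the third ring and the broadest ring is never touched.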
There's a little bit of an architectural question here around the lightweight agent that you have, and that's been so much of the strength of your technology since day one.
Yep.
Is there inherently a trade-off between having a lightweight agent and the number of updates that you then need to make at the kernel level or at the telemetry level to be able to support the agent? How do you think about the trade-offs between lightweight versus frequency of updates?
Yeah, we have to make the distinction between a telemetry update and trying to put a blocking piece in. The blocking pieces, you know, and some of the IOAs, are much more few and far between than the telemetry updates. But why does the product work, and why has it been so successful? It's because of the architecture, right? When you look at these sorts of technologies, where every security system has to run in the kernel, we make it lightweight, we make it performant, we make it tamper-proof, and we have a very rich set of information we can get back, which allows the system to continually get smarter and smarter.
And when customers think about manageability, when they think about performance, it's much different in our system. We don't require, you know, three gigabytes of storage; because of the way the architecture works, it's, like, a hundred megs. So it served us well in getting to be, by many accounts, the number one product in the market. And in this particular case, you know, a confluence of things came together to cause this issue. But customers depend on what we built and are confident in our technology and architecture.
So you're back from the Microsoft meeting yesterday. Share with us your observations on how the meeting went.
Well, it actually went really well. After the incident, I spent a lot of time with Satya from Microsoft. You know, I probably talked to him once or twice a day for the first week or something, and they were super helpful. The whole team, we were collaborating, working together, and, you know, part of the summit was really an offshoot of that. They're an open-ecosystem provider, and we're one player in the security market, but how does the security market come together to think about other ways to extend that ecosystem and build more resiliency? And, you know, there'll be more information in a blog that's coming out from them, so I won't go into all the details.
But really, the conversation was: How do you extend the architectures to provide additional resiliency, things that the security vendors can take advantage of, and other things that, you know, make it more resilient? So it's like anything else: it'll be an evolution of what this looks like and how security vendors take advantage of it. And the whole goal was to keep it open, provide extensibility, and, you know, help both us and Microsoft provide a vibrant marketplace to be able to do what we do. They need us, and we need them, and they need the security ecosystem, and it was very collaborative.
There's an interesting dynamic here where I think about how Endpoint interacts with the Microsoft kernel, and then I think about something like a Linux or an iOS operating system, where there's no kernel access whatsoever. One of the more interesting statements in the root cause analysis was there could be significant work ahead for Windows to support a security product that doesn't actually need a kernel driver. So walk us through that a little bit. What do you think a world could look like where security products and other software products can run effectively without access to a Windows driver, and is that a realistic outcome?
You have to look at some of the other operating systems, and it's really important to realize that the Mac operating system is different from the Linux world, which is different from Windows. Windows has, you know, a kernel structure where they build one kernel, and it supports all these different versions of Windows, right? And then there's a massive focus on backward compatibility: you can run really old programs on Windows 11. So it's just different, and I think people need to realize it's different. You can't simply say, "We're gonna do what was done in Linux, eBPF or something, and just apply it to Windows." So I think the approach, collectively, that people would look at is: How do you extend the current architecture that's there with additional features that the security community can take advantage of?
And there are really four areas that are important. One is visibility and telemetry getting out of the kernel. The second one is the ability to block, which you need to do in the kernel. The third one is anti-tamper protection. And then the fourth one is performance. This is actually why you run in the kernel, and you have to realize that a lot of the attacks do take place in memory, and you have to have that visibility. So as those key tenets are looked at, you know, as there are extensions that we can take advantage of, we'll do that. When we built our technology, it was originally built for Windows 7.
There are a lot of features in Windows 11 that didn't exist back then, so you have to, like anything else, evolve over time. We've used new techniques and technologies that Microsoft has added to be able to make our system more resilient and safer, as we've matured the product and they've matured their own operating system.
I want to move to a little bit of a discussion on your customer commitment program.
Mm-hmm.
You've talked about a $30 million subscription revenue impact to 3Q and 4Q. It's an extension of the Falcon Flex program, which you all had introduced at an earlier stage. So talk to us a little bit about how the customer commitments work in practice.
Sure. So let's talk about Falcon Flex, which is something that we really developed late last year, well, probably end of the summer, with customers who came to us and said, "Hey, we want to do more with CrowdStrike. We want to go all in on your platform. You need to make it easy, and you need to make it cost-effective for us to do that." So we sat down with some of our largest customers, and they said, "Hey, we'd like a burn-down model similar to an AWS, where we'll commit a certain amount of dollars. The more we commit, the bigger the discount, and then open up the entire product catalog to us so that we can pick and choose." So Falcon Flex, and this is totally independent of July 19th, was a path we were going down anyway.
In the last earnings call, we talked about $700 million of total deal value associated with Falcon Flex. People want it. It's easy to consume. It makes procurement really easy once you get through it, and customers want to do more, and they get better deals. So when we looked at the path that we were down, we said, "Okay, with a Customer Commitment Package, what's the outcome?" The outcome is, look, we know we had an impact, so how do we go to our customer and say, "We want to do the right thing, and, you know, what can we do for you?" I mean, this is a business conversation, and we want to be a long-term partner, and we've got many long-term customers that are out there. So we want to be proactive and go to them and have this conversation.
We essentially look at the impact and understand what it is, and then we needed something that was formulaic, if you will, or at least had guardrails. So we said, "Why not use the Falcon Flex program, where we can fund Falcon Flex dollars into a pool, and then we can offer that for things like new modules, right? Or you can extend the duration." Better a new module than extending the duration, but, you know, it depends what the customer wants. We've got flexible payment terms and the like, so the whole idea was: what can we go to them with to show goodwill? It's been very well-received, and then we certainly have the discussion: would they like to do any more with us? Would they like to, you know, put more dollars in the Falcon Flex pool?
Then we go through demand planning. So that's the mechanism that we're using with Falcon Flex, and we've got various tools which, you know, you could call discounting. We'd rather not just purely discount; we'd rather put it into the Falcon Flex pool and then go from there. But everything is really a discussion with the customer, and we're trying to solve the problem that they have.
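The burn-down mechanics he describes, commit dollars up front, earn a bigger discount on a bigger commitment, and draw the pool down as modules are consumed, can be sketched roughly like this. The tiers, rates, and prices are invented for illustration and are not CrowdStrike's actual terms:

```python
# Hypothetical sketch of a burn-down commitment pool (AWS-style):
# larger commitments earn larger discounts, and each module consumed
# burns its discounted price out of the committed pool.

DISCOUNT_TIERS = [  # (minimum commitment, discount rate), made-up numbers
    (1_000_000, 0.25),
    (500_000, 0.15),
    (0, 0.05),
]

def discount_for(commit):
    """Return the discount rate for a given commitment amount."""
    for threshold, rate in DISCOUNT_TIERS:
        if commit >= threshold:
            return rate
    return 0.0

class FlexPool:
    def __init__(self, commit):
        self.discount = discount_for(commit)
        self.balance = commit

    def consume(self, list_price):
        """Burn a module's discounted price out of the pool; return remainder."""
        net = list_price * (1 - self.discount)
        if net > self.balance:
            raise ValueError("pool exhausted; time to discuss upsizing")
        self.balance -= net
        return self.balance

pool = FlexPool(commit=1_000_000)  # qualifies for the top (25%) tier
pool.consume(200_000)              # e.g. a new module at list price
```

After that one draw, $150,000 of the pool is burned, leaving $850,000 for further modules or term extension, which is the "natural conversation" the pool sets up.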
How did you arrive at the $30 million quantitative amount?
Well, that's a Burt question, and he's not here. So I think when you look at what we tried to do with the $60 million across the second half of the year: there are going to be headwinds. If you're giving something away, there's gonna be some level of contraction, right? So we tried to build in our best guesstimate of what those headwinds would look like. And, you know, I think Burt was pretty clear on the earnings call: we're trying to give some framework of what we can see, and there's still a lot of work to do and a lot of things that we have to go through in working with customers on the customer commitment package.
So we tried to put something down that created a framework that, you know, both the buy side and sell side could look at. And as we get more clarity around that, obviously, in future interactions, we'll be able to articulate a little bit more of what we're seeing and how the customer commitment package is being adopted.
So the beauty of Falcon Flex is it's designed to have customers use more of CrowdStrike. And so there's a glass-half-full, glass-half-empty here. Could there be a glass half full, where customers actually come out the other side of the customer commitment using more CrowdStrike? And how do you reconcile that with a customer saying, "Well, perhaps I don't want to expand with CrowdStrike right now, because I want to figure out where things stand in the next six, twelve months before I recommit or renew at a higher rate"?
I do think there's a long-term opportunity for CrowdStrike because of the module adoption we have. You know, you've seen our gross retention rates, which are some of the highest in the industry, historically 98% plus. And when you look at what customers are doing, they're still coming to us saying: Hey, we want to do more. We still want to consolidate. We want to save costs. We want better protection. We want ease of use. So all of those things are still in play, and the customer has flexibility in when they want to use it. If they want to just extend the term, fine, we'll let them do that. If they want the modules, if they want to add endpoints, and you have customers that are buying companies, it makes it easier for them.
So each one is really a business conversation, but I think there is a long-term benefit to both the customer and us, because they were down a path of wanting to do more with CrowdStrike, and we were down a path of helping them consolidate, and ultimately, in terms of cost and complexity, we're going to be able to reduce that. And I think this is a way to just accelerate that, to get through something where we can go, "Hey, we're putting skin in the game because of this incident." And there's a short-term and a long-term benefit for the customer.
Absolutely. So one of the comments from the earnings call was around the potential for re-acceleration in the business next year. What are some of the milestones that you and the team will be looking at, whether it's pipeline, churn, upsell, et cetera, to determine that the worst of the impact of July 19th is behind you?
I think a lot of it's going to be on the net new ARR, which is obviously a big driver of what we do. So you're going to see headwinds around that in the short term, and then as you roll across the comps in Q3 and the Falcon Flex pool of dollars that we create starts to burn off, there's going to be a natural conversation of, you know, how do we extend those modules? And historically, we've got great attach rates, and customers, if they have a module, generally don't get rid of it, right? So then it's a natural conversation of, "Okay, let's talk about the Falcon Flex pool. You can add more to it. You can upsize it.
We can give you different discounts depending on what you commit." So it's actually just a natural conversation that customers are already attuned to, because they're doing that with an Amazon or a GCP or what have you. So it's in line with the way they're purchasing, and I think a lot of the procurement groups realize: the more you commit, the better the deal; the more you're using, the more you're committing and the bigger the discounts, and then, obviously, all the benefits that come from it.
I want to spend a couple of minutes more on the go-to-market before talking about some of the more product-focused questions. On the go-to-market, one of the dynamics that we've been debating with investors-
Mm-hmm
... is this idea that you can have folks that are very close to the CrowdStrike product at the customer, that are huge champions of you internally, but you could have folks higher up in the organization that look at the business impact of July 19th, and you get a little bit of tension between those two cohorts.
Mm-hmm.
Have you found that in your experience? And how do you navigate the blind spots that may exist when you have an excellent relationship with the people who are closest to the product and going to renew a deal, but ultimately a blind spot with someone higher up in the organization?
Mm-hmm. Well, we have a lot of champions and, you know, CrowdStrike lovers, right? And some of the conversations... I'll just recount one of them, which was with a large financial services company, who had to go present to the board, and he already did it. And he said, "You know, it was a relatively simple conversation." I mean, you have people that ask questions, of course, and the board is going to do their duty, which we get. But he basically said: "We've got ten years of CrowdStrike on the left side of the ledger, making a lot of deposits, saving us from ransomware and all kinds of attacks, and we had a withdrawal on the 19th on the right side." He's got a sheet like this of all the great things, and we've got the withdrawal on the 19th. He said it was a very simple conversation; he answered a bunch of questions, and ultimately the board said: "Look, this is the best product. We trust you. Carry on." Right? That was one example, and, you know, I'm not in every board conversation, but I have been in some where I've been asked to show up and explain what happened, and I went through it. I think what customers recognize and appreciate is how we handled it.
You know, I think we'll be remembered for how we handled it, not necessarily for the incident, and that's the sign of a good partner. That's the way we're trying to approach it. I'm not in every board meeting, and I don't have control over every one of them; I'm not there to speak. So the biggest thing that we can do is arm our champions, which we have, and allow them to articulate that the root cause analysis is out and that we've identified and addressed all the issues in it, which we have. They're able to articulate that back, and then we go from there.
Many of the goals and conversations you're talking about here on improving customer engagement and the technology roadmap over time, those are the same goals and conversations that we were having this time last year.
Right.
Is there any nuance to how the agenda is changing at Fal.Con next week? Any changes or priorities going into Fal.Con that perhaps didn't exist before July 19th?
I think a big part of it will be on resiliency, right? So, as an industry, not only in security but in lots of other technologies, there are a lot of things that happen that people want to know about, right? So a big part of it will be: how can we help them get visibility into, like, their entire ecosystem? Because a lot of things go on in an ecosystem that customers want to know about, and we have visibility into it. And then we'll spend some time on how our goal is to be the most transparent and resilient in these areas, which we think can be a competitive advantage. We have a lot of customers that look at this and go, "Okay, like, you had an issue here. Clearly, you identified it.
You talked about it." And, in their words, "If we were to hazard a guess, you're probably not going to have another incident because of the focus on this, right?" So it then puts the onus on everyone else in the ecosystem. And I think a big part of what we want to do is come out a stronger and better company: look at every process, understand there are always things that can be done and enhanced, and, you know, learn from some of the larger players in the industry, some of the ones that I mentioned, about how we continue to be the best in our areas. That'll be our goal and a lot of our message for next week.
I want to shift to a couple of product questions. So this time last year, you talked about being a real estate investor-
Yeah
... specifically on the value of your real estate on the endpoint. So I wanted to ask specifically about your roadmap in observability.
Yeah.
Last year, you talked about being at level one and level two of five potential buckets of functionality. Share with us the milestones that you've achieved in observability and where your roadmap is going next.
Yeah, when you look at where we are today with the LogScale technology: when we originally acquired the company, Humio, more than 50% of it was focused on observability use cases. And we still do that. I would say the last year was really focused on next-gen SIEM; that's where we put our effort. So we're still, I would say, at level one and level two in observability. But now we've got everything integrated, and if customers want next-gen SIEM, it's all natively built into the platform. And then if they want to go beyond that in terms of a data lake and extend the use case to just about any kind of data that you want, including some of the observability, they can get a full module of LogScale.
So I think we're in the sweet spot for where we are today. It's like, let's make sure we get next-gen SIEM right, and we've seen a lot of momentum and traction around that. And then, you know, we'll continue to build it out, but because of the success we've had with, like, some really large banks around next-gen SIEM, they're now showing it to all of their IT brethren and saying, "Look how fast and look how capable this technology is," and they become our internal champions.
I want to touch on some of the case studies that you've talked about in next-gen SIEM. The numbers that you've disclosed around next-gen SIEM, I think it's north of $220 million ARR. How do you think about the split between customers that use your technology as an augmentation to their existing SIEM versus a full rip and replace? And what's the barrier to get the augmentation customers into the full rip and replace category?
We have plenty of customers that start with augmentation, because if you have a SIEM, there are processes and institutional knowledge built around it, right? So we don't have to come in and say, "Just rip out what you've had for the last ten years." We certainly can augment it. One representative example is a large financial services company that was spending a lot of money with a legacy SIEM provider. We basically came in and said, "We'll be your data lake. We'll ingest all that data." We cut the bill to a third, and then we down-selected and sent the relevant data to their SIEM. And that gave them time to be able to...
You know, they don't have to rip everything out, but they're getting a better deal. They're getting faster processing. They're still using our technology for lots of queries, and over time we've been migrating the queries over into LogScale. So you have to take a thoughtful approach, similar to what we did with next-gen AV. We didn't come in and say, "Just throw McAfee and Symantec out." In the early days, we said, "Just run side by side," and then people realized how capable we were, and then it was like: Okay, do we really need some of these other technologies? We'd expect this market to unfold in a similar fashion.
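The augmentation pattern described above, ingest everything into a cheaper data lake, then forward only the down-selected, high-signal subset to the legacy SIEM, can be sketched like this. The routing rules, source names, and severity threshold are assumptions for illustration, not any vendor's actual configuration:

```python
# Sketch of data-lake-first log routing: everything lands in the lake,
# while only relevant, high-severity events are forwarded to the legacy
# SIEM, whose bill is typically driven by ingest volume.

RELEVANT_SOURCES = {"auth", "edr", "firewall"}  # assumed high-signal feeds
SEVERITY_FLOOR = 5                              # assumed forwarding threshold

def route(events):
    """Return (lake, to_siem): all events vs. the down-selected subset."""
    lake, to_siem = [], []
    for event in events:
        lake.append(event)  # the lake keeps everything for later queries
        if event["source"] in RELEVANT_SOURCES and event["severity"] >= SEVERITY_FLOOR:
            to_siem.append(event)  # only this slice hits the SIEM bill
    return lake, to_siem

events = [
    {"source": "auth", "severity": 7},
    {"source": "dns", "severity": 2},
    {"source": "edr", "severity": 9},
    {"source": "firewall", "severity": 3},
]
lake, to_siem = route(events)
```

Here all four events are queryable in the lake, but only two are forwarded, which is the cost-cutting lever in the example he gives.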
I find it so interesting that the early case studies are in financial institutions because you could argue in some ways we're early adopters, but in many ways we're not. So how did it come to pass that you had early momentum in financials?
They have the data, they have the money, and they have the product, and they use it at scale. You know, we've got so many big financial services companies that use our technology, and they generate so much first-party data, that they wanted to take in all this third-party data as well. And then, with Charlotte AI, we can look across all the first-party and third-party data and automatically create incident reports and connect dots that have never been connected before. So when they looked at the benefits, they were getting better efficacy, it was way faster (I mean, we're talking sub-second response versus, like, two days), and they were cutting their costs. That was a real win for them.
Those are the folks that have the data and want to be able to use it in different ways.
So you mentioned Charlotte AI. What is some of the early feedback that you've been getting on Charlotte? And are there a couple of areas where you think, "Okay, it's not quite where it needs to be yet, but let us iterate on it, and in two to three years it's going to be much more meaningful"?
Sure. Well, I think when you look at this market, whether it's us or anyone else, we're still really in the early innings of generative AI and Charlotte and what it does. But the goal for Charlotte was to be much more than just a chatbot, right? So the way we architected Charlotte, we built it as a foundational service within the Falcon platform, and it's actually built into the workflow. The whole initial concept was: How do you take a tier 1 analyst and turn them into a tier 3, and take eight hours of work and turn it into 10 minutes of work?
What we found is, yeah, we can do that, but we actually found huge adoption among the tier 3 analysts, where they're the power users, and it's like, "Well, okay, we know what we're doing, and we can just whip this stuff up and save a bunch of time. And we can go through and look at the output." And then, before they move it into a workflow, they're confident that we found the right things and that the workflow is going to be representative of what we found. And that's a big thing with generative AI. As you probably know, anyone who uses ChatGPT or the like sees that if you ask the same question three different times, you get three different answers.
In security, you really have to have a deterministic outcome, and we've built a lot of guardrails around that. So we're finding that the tier 3 analysts are really loving what we're doing, and they understand that the output is actually accurate, because somebody needs to look at it and go, "Okay, what CrowdStrike said happened, happened, and what they're going to do is realistic." The tier 3 analysts can get through that pretty quickly.
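One common way to build the kind of guardrail he alludes to, checking that generated output is grounded before it drives an automated workflow, is to cross-check every concrete claim in a generated summary against ground-truth telemetry. The schema and field names below are assumptions for illustration, not the Charlotte AI API:

```python
# Illustrative grounding check: a generated incident summary only enters
# an automated workflow if every host and file it cites actually appears
# in the telemetry; anything uncorroborated is flagged for an analyst.

def guardrail(summary, telemetry):
    """Return (ok, problems) for a generated summary vs. ground truth."""
    problems = []
    for host in summary.get("hosts", []):
        if host not in telemetry["hosts"]:
            problems.append(f"unknown host: {host}")
    for file_hash in summary.get("files", []):
        if file_hash not in telemetry["file_hashes"]:
            problems.append(f"uncorroborated file: {file_hash}")
    return (not problems), problems

telemetry = {"hosts": {"web-01", "db-02"}, "file_hashes": {"abc123"}}

# One cited file hash was never observed, so the summary is held back.
ok, problems = guardrail(
    {"hosts": ["web-01"], "files": ["abc123", "deadbeef"]},
    telemetry,
)
```

A summary that cites only observed entities passes cleanly; the one above is rejected with a single flagged file, which is the "somebody needs to look at it" step made mechanical.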
Do you have the ability to track usage and look at usage patterns for the analysts that are using Charlotte?
We don't necessarily know who's a tier 1 and tier 3.
Okay.
So it becomes a little bit more difficult. But yeah, we try to track who's using it, how they're using it, and the main use cases. And, you know, a lot of it is gathering data and then creating incident reports, automating. We have a whole incident workbench that's now automated by Charlotte. You literally can say, "Hey, we want to define..." or "We have a particular..." It might be just a malicious file, and then it actually builds the whole incident around it: how it happened, where it came from, who touched it, all the different elements that we have. Was there any data associated with it? Where did it come from? What identities are in use? And it just continues to build out as it takes more data in.
I'd like to ask the AI question from a different lens, which is, if you think about some of your largest customers, the Fortune 500, Global 2000, as they've been figuring out what their own internal AI roadmaps look like over the next couple of years, how does that change the conversation on security? And are there examples of customers saying, "We really need to level up," or perhaps, "Actually, we need to push out security investments because we need to figure out what we're doing first"? Any color around those conversations?
I don't think they're pushing out, like, core security investments. I think there's a lot of talk around: how do you create the equivalent AI CI/CD pipeline? Meaning, you know, you just can't take a generative, you know, frontier model and say, "Okay, now we're just gonna use it," right? You have to build a lot of structure around it, similar to what we did, which is, you know, the data provenance and governance and the privacy, and to get to a deterministic outcome, you need other things. You can't just, you know, turn a model loose and hope for the best, right? So you have to have the ability from start to finish of gathering data to training it, to, you know, doing the inference, to delivering the right outcome, to being able to wire it into a workflow.
And I think there's, you know, a lot of companies, startup companies, are kinda working on that as the next evolution to help enable, AI to really be used en masse in the enterprise.
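The "AI CI/CD pipeline" idea above can be sketched as ordered promotion gates a model artifact must pass before shipping. The gate names below are assumptions drawn from the stages mentioned in the conversation (provenance, privacy, deterministic evaluation), not any specific vendor's pipeline.

```python
# Each gate inspects metadata about a candidate model artifact (a plain dict
# here) and returns True if the artifact may proceed.

def check_data_provenance(artifact):
    """Training data sources must be recorded."""
    return bool(artifact.get("data_sources"))

def check_privacy(artifact):
    """PII must have been scrubbed before training."""
    return artifact.get("pii_scrubbed") is True

def check_eval_determinism(artifact):
    """Repeated runs on the same eval set must produce identical outputs."""
    runs = artifact.get("eval_runs", [])
    return len(runs) >= 2 and len(set(runs)) == 1

PIPELINE = [
    ("provenance", check_data_provenance),
    ("privacy", check_privacy),
    ("determinism", check_eval_determinism),
]

def promote(artifact):
    """Return the name of the first failing gate, or None if the model may ship."""
    for name, gate in PIPELINE:
        if not gate(artifact):
            return name
    return None

candidate = {"data_sources": ["telemetry-v4"], "pii_scrubbed": True,
             "eval_runs": ["hashA", "hashA"]}  # two identical eval output hashes
print(promote(candidate))  # None
```

As with code CI/CD, the point is that a model never reaches production by hand; it only ships by passing every gate in order.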
I wanna end with a question around some of the more exciting technical problems your team is working on, actually putting generative AI aside, because AI and machine learning have been part of the CrowdStrike-
Sure
... technology expertise since day one. What are one or two of the most exciting technical problems that your team is working on today?
We're doing, I think we're doing some really interesting things around threat detection and using AI around threat detection, specifically in social engineering attacks, listening to voices and the ability to actually understand if somebody's getting socially engineered and then being able to call that out. These are just proofs of concept that we've been working on, but we've been working pretty closely with NVIDIA on that as well, and you know, leveraging a lot of their technology and the go-to-market partnerships that we have. You know, part of security is obviously the technical piece, but there's always the Layer 8 problem. You know, Layer 8 is the human, as they say, between the keyboard and the chair, and a lot of these attacks that you read about are socially engineered attacks.
You know, people giving away their credentials or getting access to an MFA-type system, getting it on their phone. So we're looking at all these different vectors and figuring out: how could we use AI or other technologies? You know, even just exploring what we can do, and you know, some of the results are incredible. So these are proofs of concept, but these are kinda the cutting-edge things that we're looking at across the platform.
When you say results are incredible, do you mean it being able to predict threat intelligence, or-
Yeah
share a little more?
To actually be able to make it more predictive, whether that's in the data we consume or... This was just a particular use case of looking at, you know, sort of voice recordings of these sort of socially engineered attacks, right?
Right.
And then leveraging... I think we have the data science team. We've got the models. Like, how do we leverage what we've already built for things like this? People wouldn't assume that we can necessarily do that across voices, 'cause normally we're just dealing with data. So the team is, you know, again, focused on the outcome, which is stopping the breach, and part of the breach in today's environment is you're gonna have somebody socially engineer you. Is there a way to get in front of that? And that was just kind of an internal project that some folks came up with, with some pretty promising results.
That's really cool. Well, please join me in thanking George for his time. George, thank you for being here this afternoon.
Thank you. Thank you for having me.
We appreciate it.