International Business Machines Corporation (IBM)
NYSE: IBM · Real-Time Price · USD
231.98
+0.90 (0.39%)
At close: Apr 24, 2026, 4:00 PM EDT
231.20
-0.78 (-0.34%)
After-hours: Apr 24, 2026, 7:59 PM EDT
← View all transcripts

Status Update

Oct 3, 2024

John Villacis
Product Manager, Cybersecurity Services Threat Management, IBM

Hello, everybody. Welcome to the IBM's cybersecurity services webinar today. We're gonna be talking about using AI and automation in security to help mitigate the impact of data breaches, as well as talking about. Well, let me get to the agenda slide, and I'll cover what we're gonna cover today. But first, let me introduce myself. I'm John Villacis. I'm in product management within the cyber security services threat management portfolio, and Chris is with me today. Chris, I'll let you give a shout-out and introduce yourself.

Chris Thompson
Leader, X-Force Red Team, IBM

Hey, Chris Thompson, I lead the X-Force Red Team, and, part of that is the offense-

John Villacis
Product Manager, Cybersecurity Services Threat Management, IBM

Sure

Chris Thompson
Leader, X-Force Red Team, IBM

Generative models and, all that stuff, so excited to speak with you all today.

John Villacis
Product Manager, Cybersecurity Services Threat Management, IBM

Very cool. So let me break down what we're gonna cover. They always say, "Tell them what you're gonna tell them before you tell them it," right? So we're gonna talk about AI, the intersection of AI and cybersecurity. There's two sides to every coin, right? So there's good things that can come of it, and of course, it's an opportunity for the bad guys as well, too. We're gonna talk specifically about what we at IBM are doing to leverage AI in the security discipline, as well as some other automation that we're coupling with AI. So we'll show you and talk about some of those assets that we're building, not only through IBM Security research, but also with our clients, that we're co-creating together as we explore what generative AI can do for security practitioners.

Then Chris, you know, with all his deep knowledge and expertise, is gonna cover the security concerns around AI. And what you should be doing to make sure that AI that's in your business covers the full scope of security concerns there, from, you know, the model, the data, the infrastructure, end-to-end. So, Chris will cover that with much more depth. And then, obviously, there's, as part of the platform here, there's a Q&A capability, so you can drop those questions into the chat into the Q&A function. We will try and catch those as we go along. At the end, we'll circle back. If there were some, you know, top questions or common questions, maybe Chris and I will try and verbally expand on those. IBM Institute for Business Value.

Yes, you know, companies are using them for business process automation. Some are using virtual assistants, generative AI, conversational virtual assistants for things like customer service. There's fraud detection, but 29% of those that we polled and responded said that they're using AI at that intersection of security, and more specifically, we got a lot of responses back talking about threat detection, which you guys know if you're a security practitioner, we've been using things like machine learning to do threat detection for years now. Anyway, the AI is being used across, and that's another reason why Chris needs to talk about how to secure that AI, because it's more than just at the intersection of security.

So business leaders are anticipating financial returns, and so I, you know, I think from a business perspective, we've seen a lot of different approaches in our client base to AI. Some are saying: "Look, I don't wanna build a data science team. I don't wanna own an AI pipeline. I'm gonna consume AI as a SaaS." Look, we need to, you know, take those foundational AI capabilities and build discrete capabilities ourselves, deliver those, and obviously, and Chris is gonna cover some of this, as you move across those architectures, as the business says, "Yes, you know, we're we expect a very large ROI coming from our AI capability that we're investing in," the risk increases as well, too, right? As with many things.

So when you own everything end-to-end, as opposed to consuming it SaaS, your risk profile changes, but those returns can stack up, and that's why the business is progressing, you know, on expanding AI capabilities throughout the enterprise. And we're seeing that. We're seeing with our clients that multiple functions, that first slide that I covered, multiple functions are co-sharing AI capabilities, generative AI capabilities, to build integrated, you know, business processes inside companies and to serve their customers. So you know, and this is, since AI, a lot of AI is about data, we figured we'd grab the Cost of a Data Breach Report. You guys know, hopefully, that every year IBM produces the Cost of a Data Breach Report. And it's out. If you missed it, it's out.

You can go to IBM.com and download the Cost of a Data Breach Report from this year. But, you know, in lieu of the entire report, presenting the entire report to you today, let me cover some of the highlights.

Chris Thompson
Leader, X-Force Red Team, IBM

John, we're having quite a few audio issues on your side. You might wanna go off video just to save bandwidth, when you bring up the slides, just 'cause it's pausing every couple of minutes here.

Oh, oh, is it, Chris? Okay, great. Let me see if I can... How can I turn off my video? Gosh, I'm sorry, guys.

It's bottom right if you hover over the screen. While John gets his audio sorted out, we'll just be just-

John Villacis
Product Manager, Cybersecurity Services Threat Management, IBM

Chris, do you wanna hop to section three and do your part, and then we'll double back to me while I work on the audio and video issues?

Chris Thompson
Leader, X-Force Red Team, IBM

Yeah. Sounds good. Might need a reboot or whatnot, so let me-

John Villacis
Product Manager, Cybersecurity Services Threat Management, IBM

Okay.

Chris Thompson
Leader, X-Force Red Team, IBM

Just share here. Thanks for the patience, folks. All right. There we go. Let me get the slideshow going here. Current slide and swap those. All right, so John's gonna work on his audio. I'm gonna focus on the later part of the presentation around offensive testing of AI and where automation comes in, where manual testing comes in. For some background, the X-Force Red team has a lot of experience, you know, around offensive security. We spoke at Black Hat six times in the last year alone and we're constantly, you know, researching emerging technology and the impact to the attack surface of the organization, and whatnot. That's no different when we talk about, you know, AI.

We've been leveraging AI internally, as well as developing extensive tooling and methodology for testing of AI over the years. You know, typically, what we see in customer environments as customers start to leverage AI, and they start to slap AI on every, you know, traditional and new application, this is typically what it looks like. You have, on the far left, a MLSecOps pipeline, and that could be something as basic as just, you know, an existing model that hasn't been tuned, that you're just leveraging a little retrieval augmented generation to call internal documents, like an internal knowledge base, for example.

Or it could be something, you know, a little more complex, where you've tuned an existing model, to, you know, focus it in on your enterprise knowledge and the area, that the app is focused on. So if it's a banking application, you know, limiting its responses and its focus to, you know, calling your back-end APIs to make certain transactions happen. Or it could be something as complex as, a model training and tuning environment, where, you know, you're building your own small 1B, 3B models and, you know, leveraging GPU infrastructure for training. But typically, these ML SecOps environments, they've been readily connected into enterprise data lakes and more sensitive data sets.

So financial data, customer data, internal intellectual property, they can reference, you know, more complex or sensitive procedures, in terms of, you know, if it's a banking application, how the procedures for setting up a new account or, you know, typically, you know, lots of different enterprise data sets that get called. And it doesn't necessarily have to be a customer-facing application by any means. It could be an internal application which is intended to, you know, automate some HR functionality or, you know, a helpdesk functionality. And the second piece of the pie is we have the model itself. So the model, you know, whether it's a Llama model or a custom model or, you know, an OpenAI model, you know, how do we ensure that the model itself is, doesn't have vulnerabilities within it?

How do we test it for safety and security? One key thing I missed on the MLSecOps pipeline is typically data scientists are downloading, you know, a couple hundred models throughout the course of the year to evaluate their effectiveness and their efficiency and their fit for the solution. Normally, they'd be downloading these models from sources such as Hugging Face, and there's always the potential that, you know, somebody backdoors a model. And you know, a malicious model could be detonated in this environment. I wanna make sure that we're scanning those models for malware, you know, automatically.

But we're not fully reliant on model scanning to be that silver bullet, 'cause, you know, just like any antivirus or EDR, there's gonna be, you know, a lot of false negatives and misses. And, you know, we wanna make sure that that environment is prepared and ready, you know, for the event that the malware, you know, detonates within it. So how do we ensure that the incident responders know where the logs are? How do we ensure that we've locked that down, that environment properly? And I'll talk about a lot more about that in a minute.

Beyond the model safety and security testing for prompt injection and the ability to, you know, produce content based on copyrighted works, or worrying specifically about bias, which is extremely important when it comes to, you know, any application but especially those that make decisions based potentially on somebody's marital status, based on their ethnicity, based on you know, any number of sensitive data sets that this should not play into a factor if somebody gets approved for a loan, for example, so very important that safety and that model ethics testing is performed. Third piece is these platforms that these models are being run on top of, with generative applications built on top of them so think of your BigML, your Azure ML, your SageMaker, your watsonx.

How do we ensure that the platform has been configured securely by your team or by a third party? And how do we make sure that the connections between that AI as a service platform and your internal data lakes and any internal APIs that are being called has been provisioned securely, the identity and access management for that cloud environment is properly configured? You know, all those different expanded attack surfaces that come up rolling out a new cloud solution, essentially. How do we ensure that the Gen AI apps that there are, or ML apps that are built on top of the AI use platforms leveraging those models are secured? And how do we know that these models are, these applications are securely calling APIs?

How do we ensure that they're not subject to prompt injection, which could result in code execution in your backend platforms, all that sort of thing. So these are the areas that we're most concerned about with AI. Typically, when you hear about AI red teaming, it's focused purely on number two there, the safety and security testing of the model. But in reality, there's a much wider ecosystem that we need to consider, especially those models being run in a production application. So securing the MLSecOps pipeline, you know, how do we ensure that that environment is secure? So in addition to, you know, the model training and tuning tools, you know, how do we ensure the deployment orchestrators are secure? How do we see that logging is in place? All that sort of thing.

So, you know, we can take into consideration a number of frameworks that have been started, like the OWASP and MITRE ATLAS and whatnot. OWASP has got two projects specific to LLMs and ML, for example, and they've started to categorize some of these attacks. But obviously, as we know with MITRE, MITRE ATT&CK and, you know, any great industry effort, there's gonna be gaps in how the attack actually happens practically at the procedural level. There's gonna be a lot of solutions that aren't just limited to, say, supply chain attack. So when we're testing an MLSecOps pipeline as a red team, we're focused on it much like any DevOps pipeline, because those pipelines have the ability to spin up new boxes. They have a lot of secrets built into them.

They can be potentially abused for lateral movement or privilege escalation, and different with MLSecOps versus just DevOps is a lot of these environments are, A, built on top of very new code that's built by smaller data scientists that wasn't intended to be used in a enterprise environment, and B, you know, a lot of these MLSecOps pipelines are adjacent or readily connected into sensitive enterprise data lakes, which you just don't see on the DevOps side, so it makes for a very attractive target, as a threat actor or as a red teamer for targeting ways into this MLSecOps pipeline.

So if I, you know, manage to phish my way into your org, you know, the first place I'm probably gonna go now is after your data scientist and after your MLSecOps pipeline, because I know that red or blue teams don't have a lot of experience monitoring these environments. I know that the tools within them, you know, don't have good security logging enabled or at all. I know that a lot of these tools allow for, you know, Python deserialization and code execution, and I know that the blue teams don't have experience performing incident response or threat hunting in these environments yet, so definitely a juicy target. On the flip side, within this environment, you know, when we hinted earlier at the potential for, you know, malicious models being downloaded.

Most of the models, I think, if not all, to date, in Hugging Face that are malicious, are probably set up by one or two big researchers, and they're just demonstrating the potential impact. At least, you know, a few months ago, that was the case, where really smart folks, I won't name them because I don't know if they want it known publicly. They've backdoored, you know, quite a few models for different companies to demonstrate the impact of supply chain attacks, but in a safe way. So they're not actually, you know, fully establishing C2 and being leveraged to attack the companies. They're just demonstrating that a lot of these companies are just downloading and executing models and not, you know, checking them for malware, not verifying the author of the model, for example.

So in the future, we you know, obviously see a lot of these attacks expanding to where actual malicious threat actors are starting to, you know, backdoor some of these Pickle models. So we wanna make sure that the environment is set up in a way that you have an opportunity to spot those malicious models. So, you know, your frontline controls around leveraging something like HiddenLayer or, you know, another solution to scan these models, statically and as they're being run dynamically, and as that serialization happens, you know, can we spot, you know, a C2 being established, for example? But because, you know, antivirus isn't a silver bullet, as I mentioned, you wanna make sure that you have compensating controls as well. So we wanna evaluate the logging that's in place. We wanna evaluate, you know, can C2 be established?

A lot of these environments allow outbound internet access 'cause they have to call a lot of packages from, you know, different Python packages, or they're being used to connect directly to Hugging Face. So just blocking them from the internet, you know, isn't always feasible. So we wanna, you know, evaluate what are different compensating controls that we can have in place from a logging and hardening perspective to prevent, you know, an attack like this happening in the future. You know, so we wanna assess the ability to detect malicious model code execution in notebooks. We want to assess the impact of a data scientist's or developer's workstation being compromised. We wanna evaluate the potential to access that crown jewel data within enterprise data lakes that are connected to these environments.

We wanna proactively, you know, harden the virtualization infrastructure and any shared services or identity infrastructure that's being leveraged in this environment. One of my-

John Villacis
Product Manager, Cybersecurity Services Threat Management, IBM

Chris, before-

Chris Thompson
Leader, X-Force Red Team, IBM

Yeah

John Villacis
Product Manager, Cybersecurity Services Threat Management, IBM

... before you roll on there, do you wanna advance the slides that you're sharing, or do you wanna toggle back to the platform slides?

Chris Thompson
Leader, X-Force Red Team, IBM

Sorry, could you clarify?

Your screen has been static for what you're sharing. It is not advancing on the platform.

Should you... Do you see a pipeline security testing at the moment?

John Villacis
Product Manager, Cybersecurity Services Threat Management, IBM

We see recent talks, unfortunately.

Chris Thompson
Leader, X-Force Red Team, IBM

Oh.

John Villacis
Product Manager, Cybersecurity Services Threat Management, IBM

There, now it's-

Chris Thompson
Leader, X-Force Red Team, IBM

All right. And on my side-

John Villacis
Product Manager, Cybersecurity Services Threat Management, IBM

Now it's-

Chris Thompson
Leader, X-Force Red Team, IBM

Oh, it must have been paused. Let's see. You seeing the pipeline SecOps security testing now?

John Villacis
Product Manager, Cybersecurity Services Threat Management, IBM

Now we're seeing it, yes.

Chris Thompson
Leader, X-Force Red Team, IBM

Strange. Okay. Well, I'll keep it out of full screen. That's probably what happened there. So, previously, I was referring to this slide for, generative AI solutions. So the pipeline on the left, model in the middle, the platform that the model is running on, and then the gen AI application. And, right now, I was talking about the pipeline security testing. So those different, frameworks that are in use and, you know, the focus on testing the overall pipeline and how we harden it. And we've built, you know, a lot of... Oh, somebody said they were seeing the, slides advance the whole time, so it might have been on your side, John. Sorry to, to interrupt. I might wanna do a reboot.

All right, so back to this slide. So we've built out, you know, a lot of tooling with that that can help to speed up or automate some of the testing for these types of issues in these environments. You know, ability to perform model extraction from these environments or get malicious code execution, different, you know, tooling that we've used, created to extract, you know, different types of model weights and whatnot from the environment. And we're really focused on a lot of this research that can help us speed up how we conduct and assess these environments. We'll be releasing a lot of this tooling as open source in the next couple of months here, as well as with an accompanying white paper.

That brings us into model safety and security testing. I'm gonna go back to the slideshow here, and somebody please interrupt me if my slide's paused, but going to model safety and security testing, so again, traditionally, our red teaming is focused on the safety of the model. Can it produce biased or harmful content and the security model, so, you know, can we perform prompt injection, which could result in malicious code execution in the environment? Could we produce, you know, some sort of response to other users of the application that could be considered harmful, maybe steal their authentication tokens, or could we, you know, do attacks that are inherent to, you know, live models being used in these applications?

You know, looking at the different frameworks that are out there, you know, lots of different ways that the procedures and categories of attacks are being tracked from MITRE ATLAS to the OWASP LLM Top Ten around prompt injection and inferring training data. You know, all those sorts of things. Really, we're looking at how can we ensure that the models that we're using in these applications have robust system prompts that they're not gonna produce those biased responses, and that you know, the guardrails and kinda AI firewalls that are being used for the model input and output are effective in protecting you know, areas where these system prompts are not effective in preventing, say, code execution or common ML attacks.

We take an approach where we leverage automation from certain partners, we're partnered with, like, Garak, open source security tool, we leverage Robust Intelligence, we leverage in-house tooling, and the reason for that is there's gaps in, you know, any one solution, and so some of it, you know, can be almost fully automated, where for more sensitive application use, such as finance or healthcare or whatnot, you know, we wanna do more due diligence, but because sensitive backend data sets are being called, or maybe sensitive APIs are being called, that can pull data from other applications, or, you know, wire money or perform a HR action, such as viewing salary data or terminating an employee, for ;example.

So we really wanna make sure that that can't be abused, and the potential, you know, for somebody to view that data or perform actions that they shouldn't be authorized to, you know, can't happen. So a lot of the focus needs to be on how do we protect those backend calls from not happening? Going back to the AI-as-a-Service platform, my coworker, Brett Hawkins, authored a you know a fantastic white paper around attacking these ML APIs and AI-as-a-Service platforms with me. You know, a lot of attacks that can be conducted against those platforms you see on the left.

And a lot of them, surprisingly, don't even have logging enabled by default, and most blue teams haven't ingested those logs and brought them into the SIM and started to, you know, train for specific rules that are unique to these types of attacks. So, you know, we've created a lot of tooling around how do we, you know, if an attacker were to gain access to the authentication tokens, you know, the service principals, the DLI sessions, the managed identity tokens, the access tokens for access in these environments, could the blue team spot malicious behavior within them, and could they see sensitive actions being done?

So if I, as a red teamer or as a threat actor, manage to gain access to the environment, and I wanted to extract data from those enterprise datasets, or I wanted to perform, you know, malicious actions that could result in data theft or model theft or privilege escalation, you know, is the blue team prepared to spot these types of attacks and threat hunt for them? You know, and so as I mentioned, the MLOps kit will be relaunching, where we'll be launching it open source in the next month or two here. And you know, we'll enable, you know, internal teams to perform some of these tests themselves, but obviously we're available to help as well and bring some of that expertise.

Then lastly, around the apps, the GenAI apps and whatnot, that are built on top of these platforms, that leverage these models that leverage you know the models that are tuned or leverage RAG within the MLSecOps pipeline. We want to be assessing these apps for traditional application security vulnerabilities, but also now an expanded attack surface that comes with using a live model that you know you can basically you know store data within, or call, or you know try to get the model to open up you know ports on the backend web server, or open up interfaces that we can interact with. Or perhaps you know generate a malicious payload and execute it on the backend production web server or model production environment. So you know a lot of.

An expanded attack surface comes with using a live model in these applications. We wanna see that again that these applications that are integrated with, you know, sensitive API calls that are integrated with backend sensitive data sets that they've been hardened properly, and they're not subject to a lot of these new attacks that, you know, come with leveraging a live model in production. So, as I mentioned, you know, lots of different areas that we can help. We're very interested in advancing the overall community's awareness of these types of issues, and that's why we're releasing, you know, the white paper for free and the open-source security tooling and contributing back to the community.

But, if you ever need to talk to an expert or you want to look at how you can incorporate these into your testing program, we're more than happy to be available for that. So with that, those are my slides. I'll turn it back to John. John, if you're ready for sharing.

John Villacis
Product Manager, Cybersecurity Services Threat Management, IBM

Yeah, maybe, perfect. I don't know how I can get the uploaded slides back, showing again.

Chris Thompson
Leader, X-Force Red Team, IBM

Do you want me to share them on my side?

John Villacis
Product Manager, Cybersecurity Services Threat Management, IBM

No. I guess I could fire up the PowerPoint and share through the video bridge. Unless, Ellie. They are still showing. Okay, very good. Thank you, Ellie. I will go back up, hopefully the audio clarity is better now, given that I've stopped trying to stream video from my computer. We were covering the Cost of a Data Breach, some highlights there. The total Cost of a Data Breach has gone up. The average cost, the highest industry was healthcare, $9.77 million. Obviously, there's a lot of valuable data in healthcare. One of the interesting components here was an increase in the number of organizations paying more than $50,000 in regulatory fines.

Now, I know that that number, $50,000, is a low number, but we set the bar kind of low just to gauge if there was increasing regulatory action being taken, when it comes to a data breach, and so sure enough, there was a 22-23% increase in the number of organizations who ended up with some type of regulatory financial penalty in excess of $50,000, so that seems to be moving. That trend seems to be increasing. In terms of organizations saying that they're deploying some type of AI and security automation in their SOC, that was a jump of 10% year over year, right, so more security operations centers are adopting an AI type of capability.

When you use AI and automation to do things like accelerate investigation, you know, coordinate command center activities, the time that you can take off that breach response is 98 days, right? So organizations that have those capabilities move faster when that breach has been detected. And then the savings for organizations by using AI to fill in part of that skill gap, right? So applying AI to do routine skills that are hard to find, they save $1.76 million. And then finally, the big savings, the big payout, you know, if you're, if you have automated responses, if you have AI-driven workflows, right?

The cost savings in a data breach scenario goes up to $2.2 million, which is the biggest jump we've seen in the entire report. So investing AI, the takeaway is, investing AI into your security operations capability has returns, specifically through the lens of the data breach report. Again, it's available online, where you can download the accuracy out of generative AI results. He talked about bias, so on and so forth, so we're not gonna do any more on that 'cause Chris covered them. We'll move on to the flip side. We're not talking about blue teaming, but talking about adversaries. You've got new types of...

Higher sophistication attacks, like better deepfakes, where you have financial controllers who are being deepfaked on video calls and releasing transactions that should never happen. You've got generative AI code writing tools out there that can generate malware. So a lot of those risks, so we won't double-click on that as well. So when we've covered businesses adopting AI, the use of AI in that data breach type of model to accelerate detection and response and the payouts that come from. Chris covered a lot of those risks and how you go about testing for those. Let's talk about a little bit about AI specifically in the discipline of security.

So a lot of those risks that we were talking about are whether it's being used as a customer service bot, whether the AI is being applied, like Chris said, for some back-end processing. But let's talk about the AI specific to security. So we have been using AI in our cybersecurity services platform for years to do things like look at alerts, and we've automated 85% of alerts, right? To accelerate investigation, to look at an inhuman number of indicators and apply threat intelligence. And here's an example of. Unfortunately, I don't think the roll-in happens, or maybe I can get it to roll in here. I can get the slides to roll in.

Some timelines and some volume metrics around when we use AI in our threat triage, and threat handling function, some of the returns that we're seeing. But given that we're running short on time, let me fast-forward and talk about how we're thinking about multiple types of AI, right? There's not just. It's, it's interesting, in my opinion, that the kind of transformation that security teams are going through. They have AI capabilities for discrete functions, and this is a lot of what you see here on this slide, our kind of strategy as it comes to those discrete functions.

But we also see organizations looking to create a single interface or at least a single API to put in front of all those functions, and so more to come from us, maybe we'll cover that in a future webinar, but a strategy to stitch all these things together and the way I'm showing our kind of asset North Star here to you is using an old before the breach and after the breach. You know, how can AI play a role in many of these, some of these capabilities, like our advanced threat disposition scoring system that's been around for eight, nine, ten years, and we've used machine learning in that, and now we're stitching in generative AI capabilities.

Other capabilities, other assets that we're building here to help visualize adversary behavior or apply threat intelligence in a predictive way, or threat detection insights that does content engineering using generative AI. So using generative AI to create detection rules automatically and publish those into a blue team environment. And then finally, you know, at the end of the on the backside of the boom here, we've got our cybersecurity assistant that supports investigation and response capability. So a lot of upskilling of a SOC analyst, you know, around on the blue team side here, but definitely a lot of work before the boom, where Chris lives and the red team, and ensuring that security is implemented in a way that will protect the organization. Last two slides here, just to double-click some of the value statements.

I know we're running out of time. We've got about a minute left, so if you want to grab a screenshot of these, I'll pause on each of these slides. What you're gonna see is those assets, those AI assets that we're developing, that we're continuing to evolve and investing in, with a client success story, an anonymized client success story, and you've got some quantitative value in terms of the impact that that technology is making. So we're showing you, we're trying to be as transparent, you know, being sensitive to our client identities, the impact of that AI, those assets that I showed you across that kind of threat management pipeline, right, that they can make.

So there's this one, and then there's this slide as well, too, that also talks about outside of security operations, using AI. You'll see in the upper right corner here, to help automate compliance, or, for example, to create a new, generative AI-based identity and access management experience, right? So gone are the days of the web forms and the access review websites. Managing identity and access is now as simple as talking to a generative AI bot, right? So, those are other areas of the security program that we're working to transform, those operations using generative AI. All right, so I don't think we're out of time. I will toggle back and see if there's any other questions that have rolled in.

I don't see any new questions, so, Chris, unless there's anything that you want to add. Guys, I apologize for the audio issues. Truly, that would-- that is a one-time event for me. Usually, it's rock solid, but again, I apologize for that. Chris, anything you want to add before we shut down the webinar?

Chris Thompson
Leader, X-Force Red Team, IBM

Just, if you have any questions about what we chatted today, don't hesitate to reach out to myself or John, cthompson@ibm.com, or you can reach us on LinkedIn. Appreciate your time, and we'll hopefully be in touch. Thanks, and take care.

John Villacis
Product Manager, Cybersecurity Services Threat Management, IBM

Thank you. All done.

Powered by