Fighting Fire with Fire: Chief AI, Innovation and Data Officers on How to Combat Escalating Levels and Types of Cybersecurity Threats and Fraud

Cyber risk-related fraud in financial services is increasing in frequency, sophistication and type of attack at an alarming rate, in part due to the increasing use of artificial intelligence by bad actors. By 2025, some experts predict that the costs of global cybercrime could reach $10.5 trillion annually. The good news: AI is also being used by banks and financial institutions to fight fraud. The panel outlines a modern-day playbook for using AI to reduce fraud across channels.


Transcription:

Ron Shevlin (00:10):

I want to thank you all for coming out. My name's Ron Shevlin. I'm the Chief Research Officer at Cornerstone Advisors, and I am privileged to be joined today by Carl Eberling, the Managing Director, Juris Banking, Western Alliance Bank, and Thomas Mazzaferro, the Chief AI, Data and Analytics Officer. And I might have bungled the order of that, but I think we're in pretty good shape.

Thomas Mazzaferro (00:31):

You're good.

Ron Shevlin (00:32):

Of Truist Bank in the Southeast. Our topic today: first of all, I can't imagine that any of you who work in a financial institution are not experiencing an explosion in fraud, scams and cybersecurity issues, many of them spiking these days because of new artificial intelligence technologies. So we're going to talk today about fighting back and how to use AI to fight AI-driven fraud, scams, cybersecurity issues and so forth. One thing I want to start with before we get into the meat of the conversation here, guys: about a year ago I posted on LinkedIn and vowed that I was never going to use the term AI ever again. And the reason for that is that it's an umbrella term that covers a lot of different types of technologies, and I felt that by continuing to use the term AI, we were washing over the specificities of, and differences between, those technologies.

(01:34):

And so, about 24 hours after that, I completely broke my vow, and I've been using AI as a term ever since. But when we talk today, guys, I'm going to press you: if you say AI, what do you mean? Are you talking about machine learning? Generative AI? Agents, robotic process automation, conversational AI? I'll ask you to get into the details, and I think a lot of it will relate more to machine learning, but we'll ask you to get into some of the details of that. So again, listen, I don't think, guys, that we have to convince the crowd here that there is a fraud and cybersecurity issue. We've got deepfakes, we've got synthetic identity fraud, we've got insider threats. Carl, I'm going to turn to you first since you're sitting directly to my left. Please talk about what your organization is doing with artificial intelligence tools and technologies to fight AI-driven fraud and cybersecurity issues.

Carl Eberling (02:35):

Sure. I think first I want to make sure everybody can hear me in the back, right? Okay, because I'm going to battle against the machine. So I would say, about four years ago I joined the bank, first actually in a CIO role, running the IT organization. I wanted to really go after and make sure that we had a sound program around just log file monitoring. It might sound really simple, really stupid, but to your point, machine learning is such an important part of this, because if you aren't watching those log files, if you aren't interpreting indicators of compromise, stuff that might be going on at that level, then you're really working encumbered from the very beginning. And so, long story short, going after that log file intelligence allows us to better understand what really is going on regardless of the channel, whether it's directly at a transactional level within, say, the core, whether you're looking at log files coming off the call center stack, your telephony stack, or actual activity that's coming in through web browsers and through HTTPS, whether it's mobile or desktop.

(03:49):

And so getting that intelligence and having that first as a moat, if you will, around the activity. Super important.
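
[Editor's illustration: a minimal sketch of the log file intelligence Carl describes, normalizing raw lines from one channel into structured, channel-tagged events that downstream models can consume. The log format, field names and code are hypothetical, not Western Alliance's actual pipeline.]

    import json
    import re

    # Assumed web-channel access-log format; telephony and core logs would
    # get their own parsers feeding the same structured event shape.
    WEB_LOG = re.compile(
        r'(?P<ip>\d+\.\d+\.\d+\.\d+) \S+ \S+ \[(?P<ts>[^\]]+)\] '
        r'"(?P<method>\w+) (?P<path>\S+)[^"]*" (?P<status>\d{3})'
    )

    def parse_web_line(line: str) -> dict | None:
        """Turn one raw access-log line into a structured event, or None."""
        m = WEB_LOG.match(line)
        if not m:
            return None
        return {
            "channel": "web",
            "source_ip": m["ip"],
            "timestamp": m["ts"],
            "path": m["path"],
            "status": int(m["status"]),
        }

    raw = '203.0.113.7 - - [10/Oct/2025:13:55:36 +0000] "POST /login HTTP/1.1" 401'
    print(json.dumps(parse_web_line(raw), indent=2))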

Ron Shevlin (03:56):

Now, just to drill down a little on that, Carl: when you came into the organization, did you find that they were not tracking the log files? Is this a specialized group of people doing this now? And what do you do with that data? How does it flow from a process perspective?

Carl Eberling (04:14):

Yeah, two parts. One was, I shouldn't say there wasn't anything. We certainly had a security team that was looking at it, specifically for login events, that type of thing, but it was a bit myopic. And so when we expanded it, I started with the argument that this was really about availability. When you look at availability and reliability, you need that information, you need that data, and it certainly broadens the horizon quite a bit. Once we broadened that and opened up the aperture of what the threat surface really was, it allowed us to attack it entirely differently, to where today it's an important tool in the AI toolkit.

Ron Shevlin (04:51):

So when you say you attack it, what does that mean?

Carl Eberling (04:54):

It means, when you take these, well, a variety of things. One is you have to be talking to other people about what threats they're seeing, and that can go from the regulatory stack, whether it's the FRB or the OCC, to talking to the FinCEN community, talking to, what is it, there's another one that's for the finance community as well. And then to the peer groups, making sure that you're identifying the threat vectors that are coming in and then developing those indicators of compromise to help speed up and allow the system to detect the threats that are out there.

Ron Shevlin (05:35):

So when you say peer group, is this a group you've established, or is it something already established?

Carl Eberling (05:40):

It's more ones that we've established ourselves, so probably about 15 or 20 or so different banks that our security group is regularly talking to, and then we're certainly interested, through that FinCEN community, in talking to anybody else.

Ron Shevlin (05:57):

So for folks in the room, are you open to bringing in new banks? Should they contact you about becoming part of that?

Carl Eberling (06:03):

Yeah, yeah. To me, this is not a competitive thing in terms of how we're doing business, but rather how we strengthen our posture against the threats that are out there.

Ron Shevlin (06:13):

So before I move on to Thomas, where's the AI or machine learning component of what you're doing with the log files?

Carl Eberling (06:21):

Yeah, great question. Once you have that structured data, then, looking at it from an AI perspective, you can feed it into your LLMs, which everybody gets excited about when we talk about AI. But it's about having that data in a way that you can ask intelligent questions of it, because I don't care whether you're making a loan or whether you're trying to protect the front door: you're fundamentally looking at this information and asking, what am I missing against the models that I look at today? Each one of those IOCs that your security teams create, the indicators of compromise, is really just predicated off a model that says, if I see this, it's a threat. Right? Same thing with your data pattern on your loans, right? If I fit this particular credit underwriting profile, then I should get this kind of return. The concepts are not dissimilar in any way, shape or form.
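
[Editor's illustration: Carl's point that an IOC is "just a model that says, if I see this, it's a threat" can be shown in a few lines. The IOC values and thresholds below are invented; real feeds would come from peer groups, regulators and threat intelligence providers.]

    from dataclasses import dataclass

    @dataclass
    class Event:
        source_ip: str
        failed_logins: int

    # Illustrative IOC list and threshold, standing in for analyst-built rules.
    KNOWN_BAD_IPS = {"198.51.100.23", "203.0.113.7"}
    FAILED_LOGIN_THRESHOLD = 10

    def is_threat(event: Event) -> bool:
        """'If I see this, it's a threat': a rule-shaped model, like an
        underwriting rule with a different payoff."""
        if event.source_ip in KNOWN_BAD_IPS:
            return True
        if event.failed_logins >= FAILED_LOGIN_THRESHOLD:
            return True
        return False

    print(is_threat(Event("203.0.113.7", 1)))   # True: matches an IOC
    print(is_threat(Event("192.0.2.10", 12)))   # True: over the threshold
    print(is_threat(Event("192.0.2.10", 2)))    # False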

Ron Shevlin (07:15):

Great, thanks. Thomas, same question to you: in your organization, how are you using AI to help fight some of the AI fraud?

Thomas Mazzaferro (07:24):

So let's start off with what we're seeing in the market from a trend standpoint, then talk about what we're doing about it. CNBC had a speaker on maybe about a month ago, and they talked about seeing that among the top 500 companies in the US, 70% of their executive teams had a deepfake made of them. 70%, that's a massive number. Think about that: a deepfake, or a type of AI attack, on their corresponding executive team. So for us, we're doing a number of different things. Number one is we're partnering with a few different solutions that allow us to scan for those deepfakes and those overall attacks and actually take them down, stop them, remove them, so that we don't have that corresponding risk out there. Going forward, that monitoring is taking place to make sure that we protect ourselves, our executive team and our corresponding committees. And from an overall visibility standpoint, in being able to identify the triggers and the alerts, for us, logs are really important.

(08:32):

But it's also around all the different threat factors coming in. How do you make sure that you can scan and know what's taking place across the entire ecosystem? That's a combination of a couple of different things. One is the number of cyber solutions that actually use AI today to do threat detection, whether it be from a network standpoint or from the standpoint of understanding where your data actually sits. So how do you know what to protect? You need to scan your systems, scan your assets, and understand where your critical datasets are today. For me, that's really important. If you do detect something coming at you, you want to understand what they are attacking and what your exposure is. So we're using AI not just to detect, but also to scan our entire environment, our entire ecosystem, both on-prem and on cloud, to know where our data is, what our data actually is, what the risk of it sitting there is, and what we want to do proactively to protect it going forward. So from my standpoint, AI and cyber risk and data risk all come together, right? Both from a detection standpoint and from a proactive standpoint: being able to identify and then lock things down accordingly.
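
[Editor's illustration: a toy version of the "scan your systems to know where your critical datasets sit" idea, flagging files that contain sensitive-looking identifiers. The mount point and patterns are assumptions; production discovery tools are far more robust.]

    import re
    from pathlib import Path

    SENSITIVE_PATTERNS = {
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "card": re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b"),
    }

    def scan_tree(root: str) -> list[tuple[str, str]]:
        """Return (path, pattern_name) pairs for files with sensitive hits."""
        findings = []
        for path in Path(root).rglob("*"):
            if not path.is_file():
                continue
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue  # unreadable file; a real scanner would log this
            for name, pattern in SENSITIVE_PATTERNS.items():
                if pattern.search(text):
                    findings.append((str(path), name))
        return findings

    for path, kind in scan_tree("/data/shares"):  # illustrative mount point
        print(f"{kind.upper()} pattern found in {path}")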

Ron Shevlin (09:47):

So when you say you're using AI to scan the environment, what does that really mean from an operational perspective? Is there data you're collecting? Carl alluded to large language models. Are you feeding large language models? And if you are, how do those models differ from the predictive models that banks have been using for 40, 50 years now?

Thomas Mazzaferro (10:10):

So when you think about scanning results and you think about AI, let me explain the difference. Machine learning models have defined inputs, defined logic and defined outputs. LLMs, or gen AI models, have undefined inputs, undefined logic and undefined outputs. So think about scanning and identifying things across your environments: you want to make sure you know what's going in, what logic is being used, and what's coming out. So for us, when you do scanning, it's all about machine learning. How do we make sure that we can scan across all technologies, all platforms, all environments, so we know what's actually there, we know how it's being protected, and whether we need to remediate it or improve it? The second piece is around, I'll call it, detection. What's coming at you is in many cases undefined, different attack vectors and so forth. Being able to then use a gen AI-based solution to identify whether that corresponding traffic or volume differs from your normal patterns, that's where we use more of the gen AI suite to think about the different capabilities there.
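
[Editor's illustration: Thomas describes gen AI-based solutions for spotting traffic that differs from normal patterns. The classical z-score baseline below is a simplified stand-in that shows the deviation-from-baseline idea with fully defined inputs, logic and outputs; all numbers are invented.]

    from statistics import mean, stdev

    def is_anomalous(history: list[float], current: float,
                     z_cutoff: float = 3.0) -> bool:
        """Flag `current` if it sits more than z_cutoff std devs from baseline."""
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            return current != mu
        return abs(current - mu) / sigma > z_cutoff

    # Hourly request volumes for a login endpoint (illustrative numbers).
    baseline = [980, 1010, 1005, 990, 1020, 995, 1000, 1015]
    print(is_anomalous(baseline, 1012))   # False: within normal variation
    print(is_anomalous(baseline, 4800))   # True: e.g. credential stuffing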

Ron Shevlin (11:33):

So as you started building those models, what was the hard part? What did you have to struggle with? How did you overcome the hard parts of building these new models?

Thomas Mazzaferro (11:45):

So when you think about deployment, what's the most expensive part of any deployment? The model training, the model development. For these situations, we would only see the traffic and the patterns for our bank, which is actually very small in terms of different patterns. We actually don't think it's valuable for us to train our models on our data alone. You'd rather partner with a vendor or a solution that has trained their models on multiple banks, multiple kinds of traffic, multiple patterns. Leverage those solutions and put them in place, because it allows us to be much better off. Now we have multiple different patterns being learned by the corresponding model that we can then apply and use for our bank. And as new vectors and new threats come toward us, we have a much larger net of protection going forward. So when we think about cybersecurity, we think about scanning.

(12:54):

We really are on the path of partnering with different solutions, because it allows us to have a much larger landscape of the industry and provides a much better pattern base for our job. Our job is to take that solution and then integrate it. Once you integrate it, great, now you have it plugged in. How do you then action it? Right? One piece is around the alerts. So what's taking place? How do you have visibility and transparency? But more importantly, what's the action? What action are you taking? I think that's the next piece, in my book. When you think about where the industry is heading in this type of domain, it's around this: as you see different alerts and different triggers, how do you automate the next steps so they take immediate action, rather than alerting someone, getting them onto a bridge (now you've wasted five minutes), the person on the bridge reviewing the alerts (now ten minutes), then determining what they want to go do and putting it in place. By the time that's done, you've wasted half an hour, versus taking the alert and the trigger, already having the automation of what should be done defined, and having it automatically go and execute. That, to me, is where the industry is going over the next couple of years, because it allows you to take care of the threat actor, or to contain the threat, immediately, rather than having a delay and wait time going forward.
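
[Editor's illustration: a sketch of the alert-to-automation pattern Thomas describes, mapping alert types to predefined containment actions instead of waiting on a bridge call. Alert types and handlers are hypothetical stubs; real ones would call firewall, IAM or EDR APIs.]

    from datetime import datetime, timezone

    def block_ip(alert: dict) -> str:
        return f"blocked {alert['source_ip']} at the perimeter"

    def disable_account(alert: dict) -> str:
        return f"disabled account {alert['account_id']}"

    def isolate_host(alert: dict) -> str:
        return f"isolated host {alert['host']}"

    # The playbook is defined in advance, so execution is immediate.
    PLAYBOOK = {
        "credential_stuffing": block_ip,
        "account_takeover": disable_account,
        "malware_beacon": isolate_host,
    }

    def handle(alert: dict) -> None:
        action = PLAYBOOK.get(alert["type"])
        stamp = datetime.now(timezone.utc).isoformat()
        if action is None:
            print(f"{stamp} no automation for {alert['type']}; paging analyst")
            return
        print(f"{stamp} {action(alert)}; analyst notified after the fact")

    handle({"type": "credential_stuffing", "source_ip": "203.0.113.7"})
    handle({"type": "account_takeover", "account_id": "cust-4821"})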

Ron Shevlin (14:24):

Carl, I want to come back to you, and I still want to drill down a little bit more on the mechanics of some of this. But look, models, since the day we started developing them, have struggled with bias because of the data input. So, two-part question here. One is, how are you dealing with the inherent biases in some of these models? And what are you doing from a data collection perspective, and what types of data are you using to train the models, to help deal with the bias?

Carl Eberling (14:52):

So I want to start with the second question first. It comes down to data obfuscation. I think Tom hit on an excellent point: existing LLMs are going to have the entirety. And so when you bring that in, you have to make sure that the data sets of your information that they're looking at are not getting publicly fed back into the LLM. There are data privacy and records concerns there. So first, the analysis that we're doing is typically on data sets that are in our lower environment. We take that data set from production, it gets obfuscated in a lower environment and then fed against the model, but kept inside the protected walls, such that we can then influence the model. And that brings you to the second question, which is this question of bias. And this is really, I think, the one everybody's grappling with, because what constitutes bias versus preference?

(15:50):

What constitutes risk versus risk acceptance? They're just two sides of the same coin. And so you come back in, build these new indicators back into your models, and then run through the normal risk and compliance processes that you would have in an organization. You wouldn't put in a new risk model or a new KYC model that hasn't already gone through the risk department, the compliance department, the legal department. And so the same kind of concept has to happen here. As you garner intelligence off of potentially new, unidentified patterns, you've got to feed it back through that cycle, start to finish. So none of those other steps go away. You just now have potentially new intelligence to help in the battle.
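
[Editor's illustration: a minimal sketch of the obfuscation step Carl outlines, masking and tokenizing production records before they reach a lower environment. Field names and the salting scheme are assumptions.]

    import hashlib

    SALT = "rotate-me-per-environment"  # assumed per-environment secret

    def tokenize(value: str) -> str:
        """Deterministic, irreversible token: same input, same token, so
        obfuscated records remain joinable across tables."""
        return hashlib.sha256((SALT + value).encode()).hexdigest()[:12]

    def obfuscate(record: dict) -> dict:
        masked = dict(record)
        masked["ssn"] = "***-**-" + record["ssn"][-4:]
        masked["account_id"] = tokenize(record["account_id"])
        masked["name"] = tokenize(record["name"])
        return masked  # non-identifying fields like balance pass through

    prod_row = {"name": "Jane Doe", "ssn": "123-45-6789",
                "account_id": "ACCT-99881", "balance": 1042.17}
    print(obfuscate(prod_row))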

Ron Shevlin (16:43):

Great. Thomas, same question to you, but I would also love for you to touch on the cross-channel aspect, because a huge issue for a lot of financial institutions is dealing with collecting inputs from various touch points. So how are you dealing with the bias? How are you dealing with the obstacles to cross-channel data collection?

Thomas Mazzaferro (17:06):

It's interesting you talk about bias. In my view, I have never seen a model that has no bias. Let me say that again: I have never seen a model that has no bias, ever. No matter what model you use, there will always be bias.

Carl Eberling (17:22):

I'm biased toward people that actually pay their bills.

Thomas Mazzaferro (17:27):

Absolutely. There will always be bias in every model that you use. From my standpoint, the question is really, one, how do you minimize the bias? And two, how do you make sure that the model is performing the way that you expect? That, to me, is the more important question. So how do you have the right guardrails in place, so that when you deploy and implement a model and a solution, you have the right capabilities to review the outputs in near real time, to make sure that as it's executing, making decisions and actually driving that process, the output stays within the thresholds and the guardrails you put in place? That is the only way that you can maintain and minimize, I'll call it, negative bias. Because when you think about bias, you hit upon it, Carl: I have a bias where I want to give more credit to

(18:32):

People that pay their bills; I don't have losses then, right? But that is bias. Negative bias is, do I want to give more credit to people that don't pay their bills? Probably not, because my opportunity for losses is higher. It's always a risk-based decision. So when you think about models and we think about bias, there is never a model that's been created in the history of analytics and AI that doesn't have bias. It's all about how you implement it, how you put the thresholds and the guardrails in place, and how you monitor to make sure that it is always performing the way you want. The question then becomes, what happens when it performs in an inappropriate way or goes outside the thresholds? Then what do you do? One thing we're firm believers in is that when you deploy a model in production, you should have another model being trained, in something that we call a discovery environment, in parallel, with the new data coming in: a champion/challenger model being trained in advance.

(19:40):

If that model in production goes outside the thresholds, you then assess the one that's been training in parallel. You take it through a model review and then go and deploy. But you should always have that champion/challenger mindset, always, which is super hard to do, not because of technology, but because of human behavior. Someone goes and spends months of their life building out models, and then they say, oh, our job's done, we did it, we're live. But all of a sudden you're saying, oh, I have this model in discovery in parallel, and the next day it's better. Well, what do you mean? Are you calling my baby ugly? No, we're calling it a little bit less improved than the one we have in place. But there's a human behavioral piece here, and we actually need to change the mindset, to help them understand that it's not about not being proud of your work, but rather about being proud of putting in place sustainable processes that will continually improve as your business processes and services evolve as a company.
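
[Editor's illustration: the champion/challenger loop Thomas describes, reduced to a sketch in which a challenger trained in a discovery environment is sent to model review only when the production champion breaches its guardrail. The metric and threshold are invented.]

    from dataclasses import dataclass

    @dataclass
    class Model:
        name: str
        auc: float  # illustrative performance metric on recent data

    AUC_FLOOR = 0.80  # guardrail agreed with model risk / compliance

    def evaluate_and_maybe_promote(champion: Model, challenger: Model) -> Model:
        if champion.auc >= AUC_FLOOR:
            return champion  # still within thresholds; keep serving
        if challenger.auc > champion.auc:
            print(f"{champion.name} breached guardrail; sending "
                  f"{challenger.name} to model review for promotion")
            return challenger
        print(f"{champion.name} breached guardrail and no better challenger; "
              f"escalating to the model risk team")
        return champion

    serving = evaluate_and_maybe_promote(Model("champion_v7", 0.76),
                                         Model("challenger_v8", 0.84))
    print("now serving:", serving.name)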

Carl Eberling (20:53):

Any sailors in the room? Any people that sail?

Thomas Mazzaferro (20:56):

I'm a lake boater.

Carl Eberling (20:58):

Okay, or speedboaters. I always ask, because for a sailor, it's all about the journey, and that's what he's talking about. You've got to be on that journey. Speedboaters are about getting to the destination. Sailors get drunk on the way there. Speedboaters get drunk once they get there.

Thomas Mazzaferro (21:15):

You're absolutely right. But it's funny, technology isn't the problem here. It's actually bringing the people along and changing how they think about their work over time.

Ron Shevlin (21:26):

Alright, so, gentlemen, you've both been doing a lot of great work building the models, collecting the data, fixing it all. So, really, the million-dollar question, and Thomas, I'm going to go to you first on this one: what impact is it having on fraud reduction and prevention, and how do you know you're having that positive impact?

Thomas Mazzaferro (21:51):

So, a couple of things. Number one is, every day the bad guys, the threat actors, are improving their corresponding capabilities to try to get more money, make more money. So if you're just doing the same on your side, then you're not competing on that same plane, right? That's problem one. You have to always be continually improving as you go forward. The second piece is around, well, how do you make sure that you're driving value? Everything that we deploy, everything that we put in place, should have an ROI attached. Whether it's a soft dollar or a hard dollar, there should be an ROI aligned to it. If there's not an ROI aligned to it, then stop doing the work. People say, well, what do you mean? This is really important, I want to make sure it's being done. But no, you should be focusing your time and your effort on things that have meaningful ROI. And if you can't define the ROI, you just step back and say, is this really the best use of my time? And if not, pivot, change. When you think about driving the benefits, the ROI should be telling you that, and the monitoring should be telling you that. Are you meeting what you committed to? Are you executing and hitting that ROI? And if not, what do you need to change and modify to get there?
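
[Editor's illustration: a toy calculation of the hard-dollar plus soft-dollar ROI Thomas insists on, measured against what a solution costs. All figures are invented.]

    def fraud_tool_roi(prevented_losses: float,
                       analyst_hours_saved: float,
                       loaded_hourly_rate: float,
                       annual_cost: float) -> float:
        """Return ROI as a ratio: (benefit - cost) / cost."""
        soft_savings = analyst_hours_saved * loaded_hourly_rate
        benefit = prevented_losses + soft_savings
        return (benefit - annual_cost) / annual_cost

    roi = fraud_tool_roi(prevented_losses=2_400_000,   # hard dollars
                         analyst_hours_saved=3_000,    # soft dollars
                         loaded_hourly_rate=95,
                         annual_cost=900_000)
    print(f"ROI: {roi:.0%}")  # ~198%; committed vs. delivered is then tracked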

Ron Shevlin (23:18):

Carl, same question to you. What do you see as the impact you're having on reducing those negative impacts?

Carl Eberling (23:27):

I think there are certainly the measurable KPIs that you've got up and down the stack, and Tom just touched on a lot of them in terms of the ROI piece, but then there are also just the threats that you cut off. And it's kind of funny, it's similar. How many of you have had that procurement person that says, oh, I saved us a hundred million dollars, even though you only had like an $8 million budget and you knew you were never going to spend $108 million, but they touted it? You have to be careful of that on the security front too. We stopped 14.2 billion bad guys or gals at the perimeter, and we only had four compromises. Everybody's going to focus on the four compromises, right? It doesn't matter how many times you stopped everyone else. And so overall, it's going to be that reduction in losses out the door that people are concerned about, and then of course a reduction in the number of times that you're on the cover of Time magazine saying, oops. So you've got to prevent both the reputational risk and the actual financial risk.

Ron Shevlin (24:30):

So those are the measures, but alright, what's the impact? Are you able to now, thanks to your AI efforts, go back to the management team and say, here's the impact?

Carl Eberling (24:40):

Oh, 100%, we can go back and show it. And particularly where I think you wind up getting your biggest bang for the buck is going to be the time to shut something down. Because once you have a penetration that happens, or you have a risk that comes in, and it is going to happen: anybody who sits up here and touts that they're going to block everything and nothing will ever get through is just lying. The real question is, once it happens, how quickly did you identify it? How quickly did you shut it down and prevent incremental losses? And it's a shitty way to do business, but it's what has to be done, because, like I said, you can tout all the times you prevented it before it even came in. No one really cares. It's really about how quickly you responded to the problem once it came.

Ron Shevlin (25:31):

So look, I don't want to be a jerk, but how much faster are you able to identify and prevent, thanks to your AI efforts?

Carl Eberling (25:41):

Oh, I'd say we're in the minutes category. If something happens, we're able to shut traffic down, identify the actual risk and develop these IOCs.

Ron Shevlin (25:54):

Minutes as compared to what beforehand? Hours? Days?

Carl Eberling (25:56):

Days.

Ron Shevlin (25:58):

Same thing?

Carl Eberling (26:00):

Similar. Awesome.
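
[Editor's illustration: the "days to minutes" claim implies measuring detection and containment times across incidents. A sketch of computing those means from incident timestamps; the data is invented.]

    from datetime import datetime

    def mean_minutes(pairs: list[tuple[str, str]]) -> float:
        """Average gap in minutes between ISO-format timestamp pairs."""
        fmt = "%Y-%m-%dT%H:%M:%S"
        gaps = [(datetime.strptime(end, fmt)
                 - datetime.strptime(start, fmt)).total_seconds() / 60
                for start, end in pairs]
        return sum(gaps) / len(gaps)

    # (first_malicious_event, detected_at) and (detected_at, contained_at)
    detect_pairs = [("2025-03-01T02:14:00", "2025-03-01T02:19:00"),
                    ("2025-03-07T11:02:00", "2025-03-07T11:10:00")]
    contain_pairs = [("2025-03-01T02:19:00", "2025-03-01T02:26:00"),
                     ("2025-03-07T11:10:00", "2025-03-07T11:21:00")]

    print(f"mean time to detect:  {mean_minutes(detect_pairs):.1f} min")
    print(f"mean time to contain: {mean_minutes(contain_pairs):.1f} min")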

Ron Shevlin (26:02):

Okay. We've got a few minutes left. Yeah, just a few minutes. And so, last question here. I want you guys to look forward to the next three to five years. I would say five to ten, but that's way too far out. Over the next three to five years, what do you see coming down the pike from a generative AI, from an agentic AI perspective, that you're looking forward to incorporating into your fraud prevention, detection and management efforts?

Carl Eberling (26:29):

For me, it's going to be augmented human-in-the-loop, because the human in the loop today is still your weakest link. You talk about deepfakes and bias in particular: we have a bias, if you're in the customer service field, where you like to provide service, and you're going to be sensitive to the idea that adding too much friction into the process is pissing that person off. That's customer service. It's in our human DNA to want to compromise, to want to get to the answer that the person needs. And so for me, it's going to be augmented human-in-the-loop, where the AI is that whisper agent. It's that thing sitting in the background, identifying that something's not right, enhancing that little shoulder angel that says, hey, something's going on here, maybe you should add some friction to this. I think that's going to be the biggest change for me.

Ron Shevlin (27:27):

Are you saying then that the tools you're using today are not adequately enabling the human-in-the-loop aspect?

Carl Eberling (27:34):

I think the tools we're using today are meant to work in the background and ensure less friction; they're the scaled, automated pieces, as opposed to a listening agent that's actually hearing the tone of what you're saying, hearing the rhythm of what you're saying, maybe even interpreting whether you're going through a voice filter when you come through, to see, is it really you? So to me, it's enhancements to what we have to really address the deepfake piece, because I think that's going to be an increasingly difficult problem to solve.
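
[Editor's illustration: one possible shape of the "whisper agent" Carl anticipates, rolling background call signals into a nudge that tells the human agent to add friction. Signal names, weights and the threshold are invented; a real system would score these with trained models.]

    def whisper(signals: dict[str, float],
                nudge_threshold: float = 0.6) -> str | None:
        """Return an on-screen nudge for the service agent, or None."""
        weights = {"voice_filter_score": 0.5,   # likelihood of a voice filter
                   "request_risk": 0.3,         # riskiness of the ask itself
                   "tone_anomaly": 0.2}         # deviation from caller history
        risk = sum(weights[k] * signals.get(k, 0.0) for k in weights)
        if risk >= nudge_threshold:
            return (f"risk {risk:.2f}: possible deepfake or social "
                    "engineering; slow down and add a verification step")
        return None

    print(whisper({"voice_filter_score": 0.9, "request_risk": 0.7}))  # nudge
    print(whisper({"voice_filter_score": 0.1, "request_risk": 0.2}))  # None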

Ron Shevlin (28:11):

Got it. Thomas, same question to you. What do you see coming down the pike from a generative and agentic AI perspective that you're looking forward to using and incorporating?

Thomas Mazzaferro (28:19):

I'll give you two examples. The first one is that how we build and engineer solutions will change with what's happening right now with the ability to do vibe coding: actually having the domain expertise and being able to use a chat-like feature and say, I want to build this type of solution, this type of integration, and the automated code, already optimized, is then provided to you. I think engineering and software delivery is going to be largely transformed in the next two to three years, where the domain expertise will actually be the talent you're going to be hiring, because the actual coding and engineering work will be largely automated. We'll scan before we deploy, but it'll be largely automated. It will turn technology teams on their head pretty quickly. That's the first piece. The second piece, maybe outside of banking, more holistically: I think it's going to change how we live in many ways. You've seen it already; it's happening today with life sciences. The ability to identify cures, vaccines, medical capabilities is accelerating across many fields. That's what will help all of us and our families going forward. It will be quite amazing.

Ron Shevlin (29:38):

I guess we're done, guys. Thank you. I want to thank you guys. That was the fastest half hour of my life, and I hope it wasn't the slowest half hour of your life, but please join me in thanking Carl and Thomas.