Track 4: The latest highlights of AI

AI is one of the key technologies named in the Everest Group's recent "Risk & Compliance in BFS IT Services PEAK Matrix® Assessment 2023" as helping BFS firms manage end-to-end risk and compliance objectives. This session will focus on advancements in artificial intelligence relating to fraud and security in payments. How important is AI today relative to other digital technologies? And, given ever-increasing complexity, what new examples of AI should be considered in the area of fraud and risk compliance?

Learning Objectives

1. What are the latest advances in AI for fraud and security?

2. What are the new regulatory concerns and challenges?

Transcript:

Presentation Narrator (00:09):

Hi everyone. Thank you for coming. We are going to do track four, the latest highlights of AI, and I am going to pass it over to our moderator, Tim Chambers.

Tim Chambers (00:20):

Thank you.

Tim Chambers (00:22):

Good afternoon everyone. Thank you for joining us this afternoon. My name is Tim Chambers. I am the head of Omega Ops at Mission Omega, a mission-driven fraud services company. I have right at 30 years in financial services; give or take, 28 of that was in-house at financial institutions, where I did everything from fraud governance and fraud risk management all the way through to leading worldwide fraud operations teams for many years. I am joined on the panel today by Kevin C Slack of Wells Fargo. Kevin is senior vice president of strategic planning and execution. In his role, he leads the strategic planning and capabilities for enterprise fraud analytics, where his team is responsible for identifying, developing and delivering strategic solutions for AI, data analytics, automation and fraud detection across the retail, digital and voice channels. Also with me today is Paul Stockton of NTT Data.

(01:31)

Paul is the consulting manager for the fraud, risk and compliance practice. Paul is a dedicated risk management professional with over 20 years of experience solving strategic and tactical issues in fraud and financial crimes. As fraud is an ever-changing phenomenon, he works closely with his clients and leading FinTech providers to ensure their approaches to risk are durable and holistic. So thank you both for joining us today and for kind of leading off this topic. You all probably saw in the narrative on the website that AI was identified in a study earlier this year as one of the key technologies to be used for risk and compliance activities in financial services. So we are going to spend our time talking about and focusing on how that is and can be applicable in the fraud and security space. One that is ever growing, and for a number of different reasons that we will get into.

(02:32)

But as I kind of thought about how we think about this in banking, I stood back and said, boy, whether it was make-believe or whatever you want to call it, AI has been around for decades. How many people remember HAL 9000 from the sixties movie 2001: A Space Odyssey, Ira in the 1970s Wonder Woman TV series, and then probably more recently Ultron in the Avengers franchise? So I think where it has become more prevalent is just, obviously, over time and with the speed technology moves now: the evolution of AI and its capabilities. And one of the things that I think is critical for us all to remember is that there are two sides to this game. The industry is looking at how to use AI in the fraud and security space, and as fraud fighters, how we can use it to secure customers and secure the institutions. But we also have to remember the fraudsters are leveraging AI to beat all the defenses that the banks have to offer.

(03:52)

A couple of examples you may have seen recently: there was a LinkedIn article that spoke to generative AI being used to commit check fraud. Also, there is AI-based voice cloning that is unfortunately being used to mimic loved ones, think of the grandparent scam on steroids, cloning the voices of family members to then get you to give out credentials, send wire transfers, or whatever the case might be. So it is that continuing, ever-changing battle. To lead us off, though, Paul, I am going to ask you: I think it may be good to do kind of a landscape level set here, because more often than not in my career in recent years, there are a lot of people that throw around AI in the same conversation with machine learning, robotics and also straight through processing. So can you take a couple minutes and just kind of help us understand what AI is and what it is not?

Paul Stockton (04:57):

Absolutely. And I think really that is the perfect place to start in talking about AI. That idea that we have got so many new technologies, in some cases new variants of an old technology, and they kind of get commingled with each other. But I think that layered idea of exactly what these technologies are is really important for every institution to understand. Every company really has kind of a straight through process today, that happy path of what you do, whether it is using your people, your technologies or your processes, to make sure that you have got the best customer service and the best product line out there, and that you are able to recognize risks, remedy them, and predict risks as well. But there is a layering to it. You have got straight through processing, robotic process automation, machine learning, and then finally the AI component. Kind of think of them in terms of a foundation and an acceleration.

(05:59)

Once you get up to machine learning, you are talking about that predictive capability. It is at that final layer of AI that you really start to think about what the alteration to your process is going to be. That is sometimes where people start to get a little frightened. You do not want something altering your process without you really being able to control it. Thankfully, my wife gave me the analogy when she tapped me on the shoulder the other day and said, hey, walk me through this, what do you mean by these layers? I said, you know what? Let us take the Roomba. Okay, we have got a Roomba. There is an input and an output. The input is I have a Roomba. The output is I want a clean room. Foundationally, straight through, I could jam a broomstick in the top of the Roomba and push it around the room, and I would have a clean room.

(06:46)

Eventually it gets the job done. Let us add some robotic process automation on top of that. Now the machine is turning itself. I can get rid of the broomstick. It turns, it stops, it backs up. It does the things that a Roomba should do. And with machine learning, now it is actually mapping the room. After a while, it gets pretty good at understanding where all my furniture is. It can avoid things. It can predict that it is near a set of stairs, stop itself and back up. Wonderful things to be able to do. Then there is that next level, the thing that starts to get a little scary, which is where all of a sudden it says, what if I had additional information? Instead of saying your input is you have a Roomba and your output is you want a clean room, what if I decide I cannot give you a clean room and I am not going to? Why not?

(07:38)

Because I know you also have a dog, and your dog is out of the crate, and your dog might have left something as a surprise on your hardwood floor that you do not want me, as a Roomba, sweeping all around the house. It stops that flow of input and output, and good AI recognizes the right inputs in order to stop that flow. It is the bad AI, those troublesome issues, that we want to try and get in front of, understand, and decide exactly what information we should take in and what information we need to leave behind to get the output that we actually want.
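To make the layering concrete in code, here is a minimal, hypothetical Python sketch of Paul's Roomba analogy; every name and rule below is invented for illustration, not a description of any actual product:

```python
# Hypothetical sketch of the four automation layers: STP -> RPA -> ML -> AI.

class Room:
    def __init__(self, hazards):
        self.hazards = hazards  # e.g. {"stairs", "dog_accident"}

def straight_through(room):
    # Straight-through processing: one hard-coded path, the broomstick
    # jammed in the top of the Roomba. Fixed input, fixed output.
    return "clean room"

def rpa(room):
    # Robotic process automation: scripted reactions (turn, stop, back up),
    # the same every time, with no learning involved.
    return "clean room (obstacles handled by fixed rules)"

def machine_learning(room):
    # Machine learning: the device maps the room and predicts hazards it
    # has learned, such as a set of stairs.
    if "stairs" in room.hazards:
        return "clean room (predicted stairs, stopped and backed up)"
    return "clean room"

def ai_layer(room):
    # AI: can alter the process itself. Good AI recognizes an input the
    # original flow never anticipated and halts the input/output flow.
    if "dog_accident" in room.hazards:
        return "halted: cleaning now would spread the mess around the house"
    return machine_learning(room)

print(straight_through(Room(set())))
print(rpa(Room(set())))
print(ai_layer(Room({"stairs", "dog_accident"})))
```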

Tim Chambers (08:18):

Yeah, and I think that is a great way to lay it out, thinking about how a piece of technology works in a Roomba, right? Because to your point, it has to do all those different things. Right, exactly. And where will it go in the future?

Tim Chambers (08:33):

So Kevin, I wonder, as we think about fraud, and we are all familiar with the fraud life cycle: we have prevention, detection and resolution.

Tim Chambers (08:43):

I would be curious, from your perspective at Wells Fargo in exploring the opportunities for AI in fraud mitigation, how do you identify where in the life cycle is the place to start? It may or may not be at prevention. Maybe that is based on where AI is today, or maybe on your own internal systems and data. So how have you looked at where the right starting point is?

Kevin C Slack (09:15):

Well, we have really focused on where the problems are, and there is applicability for AI at all the different steps in the life cycle. I mean detection, prevention, and certainly the fraud customer experience: how do we use AI to really create a better fraud experience for good customers? Meaning, when good customers have fraud on their account, what can we do to make that experience better for them? How can a model predict what steps we can take to remedy that customer quicker and make it more seamless, that whole unfortunate journey they have to go through when they are compromised? And so AI is not just about detecting the bad guys or preventing bad things from happening, though there is certainly huge applicability for it there, but it is also for use for good as well, which is interesting. You do not often hear it characterized that way in the media.

(10:35)

AI is scary and it is going to take over the world and replace all of us in this room here. And some of the learnings that we have really had are that, as Paul touched on, AI is only as good as the data. If you do not have the data, AI is not magic. It is not going to know what to do about what the dog left behind. A human needs to have given the model the data that there was something left behind, and a human needs to have told the model what to do in that circumstance, whether that sounds like do not clean, do not run over it, or clean the room. So it is having the right data and having the right, for lack of a better way to put it, the right human inputs to AI that are critical.
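A minimal sketch of that point, with hypothetical sensor fields and labels: the stand-in model below can only respond correctly to the dog scenario because a human supplied a labeled example of it.

```python
# Hypothetical human-labeled examples; without the "mess_detected" row,
# no model could learn the right response to the dog scenario.

training_data = [
    # (sensor reading from the device, action label supplied by a human)
    ({"obstacle": True,  "mess_detected": False}, "turn_and_continue"),
    ({"obstacle": False, "mess_detected": False}, "keep_cleaning"),
    ({"obstacle": False, "mess_detected": True},  "halt"),  # the dog scenario
]

def predict(reading):
    # Stand-in for a trained model: the nearest labeled example wins.
    best = max(training_data,
               key=lambda ex: sum(ex[0][k] == reading[k] for k in reading))
    return best[1]

print(predict({"obstacle": False, "mess_detected": True}))  # -> halt
```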

Tim Chambers (11:36):

So let me ask a follow-up on that, because I think it is very interesting, Kevin. Is AI really a benefit to fraud operations? Excuse me. Is it a benefit to customer experience? I mean, is the driver efficiency? Is it fraud mitigation? Or can you solve a number of those things, maybe not out of the gate, but over time and with experience?

Kevin C Slack (12:06):

You can really solve both, or all of those. I mean, certainly an AI model can help us, as I mentioned: if fraud has occurred on a customer, can the model identify all the parameters or dynamics of what happened and just remediate the customer, whether that means crediting their funds back or doing whatever it takes to make it right for the customer? And again, it all comes down to the data. The model is just looking at all the available data and saying, how do we make this right for Tim? And it is able to do that in a way that typically takes more time for human agents. So the model is able to look at a broader set of data more quickly, and with a broader perspective, and make a better decision that is more satisfying to the customer, more seamless for the customer. And ultimately, again, it is being used for a good purpose, to correct a wrong. So certainly from an operations efficiency standpoint, there is tremendous applicability there. But then on detecting the bads, AI can find patterns and can really be powerful for preventing and avoiding losses, but again, it is only as good as the data.

Tim Chambers (14:07):

So Paul, in that same vein, you work with a lot of different customers, probably across a multitude of industries as well. Excuse me. I would just be curious, from your perspective, having a broader brush of the industry, so to speak: are folks looking at the use of AI more for operational efficiency or pure fraud detection, and how are they thinking about how they want to go about it and where they are going to get value? Because at the end of the day, and Kevin, I am sure you guys had some of this, right, there has got to be a business case to be made to justify the spend, for sure, right? And so is that loss reduction, efficiency of operational staff, customer satisfaction?

Kevin C Slack (14:48):

Absolutely.

Paul Stockton (14:50):

And yeah, I think it is really all of the above. There are a couple of really interesting cases where I think AI can be used. If you think about the IVR experience today, great things have been done with natural language processing. You do not need the specific verbiage, you do not need the specific language, to pinpoint precisely the problem that you are having with your bank or your retailer. In your IVR, you no longer have to press a button corresponding to the avenue you need to go down. You can just explain, hey, here is the issue that I am having, in natural, plain-spoken language. With AI, a lot of those tools can now be leveraged to not only take the nuances of those questions and route you to the right places, but also provide you feedback that you can understand a lot more readily. I think a lot of times the old push-button format, press one for this, press two for that.

(15:49)

It truly felt robotic. It was not natural. People did not enjoy interacting with it. But now, with an AI-enhanced experience, IVR becomes a lot more seamless. You take more people out of the equation, not in a negative way, but you take them out of the equation of having to decipher what that need is. AI does a tremendous job there. On the cost savings side of that, you can of course see how less time spent within those queues translates directly into dollars. That is a wonderful thing. We want to make sure that if you are able to apply AI to actually streamline a process and make people more comfortable, your product improves, the customer experience goes up, and there is a bottom-line benefit to it as well. We do see it in operational control too. Whenever I think about fraud risk especially, I always have in mind that traditional bell curve: you are looking to see where the upper limits of fraud are and where the lower limits are.

(16:52)

Somewhere within there you are dealing with something that is pure fraud, you are dealing with maybe some waste, and you are dealing with some abuse. If it is an abusive practice, it might not be something that you want to treat with a broad brush and say, for every abusive practice, I need to get rid of that account, I need to cut off that credit line. You may simply take that as an opportunity to say, AI has helped me recognize this pattern, and there is something that I can do to actually tweak the product instead of managing the customer's actual behaviors. So I think use cases for AI being able to really segregate those matters of fraud versus waste versus abuse are really beneficial as well, even when it comes to healthcare and retail practices. The fraud across the retail industry right now is rampant and rising. We recently read through the National Retail Federation that, for the first time, and everybody can clap if you want to, the amount of retail returns in the US exceeded the budget for the Defense Department. So everybody who goes to return things, you bought your three pairs of jeans but you are only going to keep one of them: a hundred and some odd million of us have done that this year, and those dollars are real. Being able to leverage something like AI to predict, hey, three purchases were just made and we can expect two of them back, is a very beneficial practice. It is not fraud; it is somewhat wasteful, it may be abusive, but AI can, I think, really help lead the way in helping us decipher those things before we treat them with the broad brush.
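As a rough illustration of the natural-language routing Paul describes earlier in this answer, here is a tiny intent classifier built with common Python tools. The utterances, intents and routing labels are invented, and a production IVR would use far richer NLP than this bag-of-words sketch:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical human-labeled caller utterances mapped to routing intents.
utterances = [
    ("I lost my debit card yesterday",          "card_replacement"),
    ("there is a charge I do not recognize",    "dispute_fraud"),
    ("someone used my card without permission", "dispute_fraud"),
    ("I want to return the jeans I bought",     "returns"),
    ("my new card never arrived",               "card_replacement"),
    ("I need to send these pants back",         "returns"),
]
texts, intents = zip(*utterances)

# Train a tiny intent classifier over bag-of-words features.
router = make_pipeline(TfidfVectorizer(), LogisticRegression())
router.fit(texts, intents)

# The caller just explains the issue in plain language; no button presses.
print(router.predict(["somebody charged my account and it was not me"])[0])
# -> dispute_fraud
```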

Tim Chambers (18:38):

And I think one of the things you kind of touched on is, as I thought about it in my prior role when I was in-house at an institution and as I talked to folks, there is a lot of conversation trying to think through the regulatory concerns and the control concerns. And I know we kind of poked fun earlier at where AI shows up in TV and movies, but I think it resonates in this space, because if you go back to HAL 9000, you have the "I am sorry, Dave, I cannot do that." Right? And so I would be interested in both your perspectives. A number of things come to my mind when I think about the potential regulatory aspects: performance monitoring, change management, audit trail tracking. And obviously we all know regulation will catch up at some point; however, everybody is trying to figure out how to use it today within the current regulatory construct. But a few things do come to mind for me. AI came up in a conversation earlier today relevant to complaints management data,

Tim Chambers (20:00):

and those kinds of things. I also think, as we talked about, if you used AI in claims management as an example, how do you ensure you do not unintentionally have unfair practices or disparate treatment in claims decisioning? Right? So I would just be interested, Kevin, maybe from your perspective, since you all are looking at AI, and then Paul, kind of your perspective, more from a risk and compliance angle.

Kevin C Slack (20:27):

Yeah. Well, I will just say we have not solved it yet. I mean, regulatory compliance when it comes to AI models is new territory for the regulators, as you pointed out; we are not there, or they are not there yet. But on our side too, it is not there yet either, in the sense that the benefit, or the beauty, of AI is also what makes it so challenging: the fact that it is able to find and detect things and find opportunities that we cannot telegraph. And from a regulatory perspective, we are expected to be able to telegraph all our decisions and document why we did this and why we did that. When you have a model that is identifying those things, how do you monitor the model? And so I think all financial institutions that are really starting to invest heavily in AI are struggling with that. And I think, to your point, regulation and the regulators will catch up. I think there will sort of be a meeting of the minds, where we learn the right way to do it and the regulators have the right perspective on how they want to go about it, and it should mesh.
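One plausible building block for the audit-trail problem Kevin raises is to log every model decision with its inputs, score and model version so it can be reviewed later. A minimal sketch, with all field names assumed rather than taken from any institution's actual compliance schema:

```python
import json
import time
import uuid

def log_decision(features: dict, score: float, action: str,
                 model_version: str, audit_log: list) -> None:
    # Append a timestamped record of what the model saw and what it did.
    audit_log.append({
        "decision_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": features,   # what the model was shown
        "score": score,       # what the model produced
        "action": action,     # what the system did with it
    })

audit_log = []
log_decision({"amount": 5200, "channel": "wire", "new_payee": True},
             score=0.91, action="hold_for_review",
             model_version="fraud-model-2023.10", audit_log=audit_log)
print(json.dumps(audit_log[0], indent=2))
```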

Paul Stockton (22:15):

Yeah, I totally agree. To me, it kind of hearkens back to that layered approach again, from straight through processing all the way through AI. Regulators are generally proficient at things that they understand and that have been well documented over time. So the regulations right now are basically based around what they expect in straight through processing, what they see coming through some light robotic process automation, and somewhat in the machine learning vein, but AI regulation really has not come around the corner yet. It will, and it will come a lot faster than any other regulation has. I kind of thought in my mind, okay, what was the speed of regulation in the past? And for many new technologies, you could expect that five years down the road, somebody would knock at the door and say, hey, maybe we should put some regulatory constraints around this and actually get compliance standards out there.

(23:13)

We are talking Glass-Steagall Act or ISO standards, whatever we need to put in place; that five years down the road is now more like five months. So I would say to companies, if you are in the vein where you are thinking, I am comfortable with my straight through processing and I have maybe applied some light robotic process automation, and you are looking to take those next few steps into machine learning and applying some AI, maybe pump the brakes just a bit. Take it truly step by step, and really give some conscious thought to what levels of automation you are going to be comfortable with. I was kind of struck, just flipping on the news earlier today, seeing that Samsung decided that no one across their company can pull the open ChatGPT tools onto any Samsung device within the company. It is barred now. That struck me as a bellwether moment, where you say, okay, even a company where that type of AI could live on their devices, where it could help them with coding and any number of very beneficial things, recognizes there is also a vulnerability. I think it is a great moment to pause and truly understand what levels of automation we are comfortable with and where you stand on that maturity curve today. Not every organization is built the same, and that is not a point of fault; it is just simply understanding where you are on that maturity curve and putting all your ducks in a row before you decide, yeah, I am ready to make that leap of full faith. Right.

Tim Chambers (24:52):

Yeah, good point. So Kevin, I wonder if you could share with our audience, for those that may be thinking about AI or wanting to launch, I would be interested if you are willing to share any learnings or pitfalls that you and the team have hit. And I know you guys are still working through it, so to your point, you have not solved it, but how do I get started? Who are all the players, right? Because, I mean, even in your comments you have talked about tech folks, model risk folks, right? Fraud teams. So it is obviously a village; you cannot solve it with just yourself and your team, right? So how can we help the audience with how to think about starting their journey?

Kevin C Slack (25:39):

Well, again, I would say it is all about the data. Again, if you do not have the data to solve the problem that you are trying to solve with AI, AI is not going to magically help you without the data. Another learning is that you need the people. And I touched on this earlier, as in Paul's Roomba example: the people need to understand what the dimensions are of the fraud we are trying to prevent, what the dimensions are of how you disposition a claim in a very positive way for our customer. The model needs to know that. I mean, AI models need a level of intelligence given to them upfront, and it is humans that do that, based largely on traditional data analysis. So again, you need humans that have the subject matter expertise in what you are trying to solve. I think we went through this, I think everyone goes through this, where you think you can just throw AI developers at a problem and they are going to come up with a model to solve it.

(27:15)

Well, no, you need to throw data experts at the problem. You need to throw subject matter experts at it: fraud, claims, customer experience. You need those individuals who are experts in what the problem is and how to go about solving it to then really educate and, true to the word, train the model. And then, of course, models can in some instances start to self-learn, start to know what to avoid; the Roomba starts to know what to avoid in the house. But again, you need a level of human subject matter expertise. AI technologies require technology folks with specialized backgrounds, so backgrounds in Python, backgrounds in data tools. And one of the things that we have spent quite a bit of time on is giving our folks who may not have had the opportunity to work with those technologies yet the ability to grow professionally and start to get the training to evolve into those technologies that you need to be familiar with in order to be successful in building models.
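A minimal sketch of the workflow Kevin describes: subject matter experts label historical cases, and technologists train a model on that data. The features, labels and transactions below are fabricated purely for illustration:

```python
from sklearn.ensemble import RandomForestClassifier

# Each row: [amount, is_new_device (0/1), minutes_since_last_login].
# Labels come from fraud SMEs reviewing past cases: 1 = fraud, 0 = legitimate.
X = [
    [25.00,   0,  12], [4800.00, 1,   1], [60.00,   0, 300],
    [9900.00, 1,   2], [15.50,   0,  45], [7500.00, 1,   3],
]
y = [0, 1, 0, 1, 0, 1]

# Technologists train the model on the SME-labeled data.
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X, y)

# Score a new transaction: large amount, unrecognized device, fresh session.
print(model.predict_proba([[8200.00, 1, 1]])[0][1])  # estimated fraud probability
```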

Tim Chambers (28:47):

Yeah, I think that is a great point, right, because I remember the last time I was at an institution, that was a conversation. We also have to understand, right, there are not tens of thousands of advanced technology experts out there in AI. So it is not like we can just go hire a hundred people and two weeks from now have the outputs. A very valid point. So, I know I could ask a thousand questions, but I know we probably have only a few minutes left, so I want to open it up and see if we have questions from the audience. Right here in the front. She is coming with the mic.

Tim Chambers (29:30):

Right here.

Audience Member 1 (29:35):

How are you doing? You talked a little bit about the pitfalls of AI. Were there any aha moments of AI, sort of the flip side of that coin, where instead of, oh, this could be trouble, it was, oh wow, this could be next? And how can banks take advantage of those aha moments?

Paul Stockton (29:59):

I can take a stab at that. Yeah, yeah, absolutely. I think in the usage of AI, and some of what I think of as the good, the bad and the ugly, there are definitely some of those aha moments. On the benefits, I mentioned IVR, for instance: the ability to get a way more seamless experience using IVR. That is a great good. There is kind of a bad in there as well; call it the homogenization of the experience. With AIs like ChatGPT in particular, you have to think of every output of ChatGPT also becoming an input. The answer that it gives you is also going to go back into that network of answers and lose a little bit of its nuance over time. So I may ask a very valid question today and get an answer, and then that goes back in. Every time that question is asked from here on out, it becomes a part of the greater pool of data, and that information over time can start to homogenize the experience.

(31:08)

Now, the thousandth person who asks that question might not get an answer that feels tailored enough to them. It is just going to be the same answer that they have already seen a hundred times. So when it comes to AI, when it comes to getting those responses, that level of differentiation starts to diminish over time. We as people are going to have to get a lot better at understanding nuance and providing nuance back to tools like these. Then you start to get into the truly ugly aha moments. In fact, you kind of mentioned them earlier: some of the aha moments of ransom situations, AI actually knowing enough personalized information to be able to synthesize a voice and actually make a call. That is horrific. Those kinds of aha moments, the abusive uses of AI tools, are kind of the thing of nightmares.

(32:09)

But what it comes down to is really intent and purpose: understanding what kind of moments we can help build in a user experience that are beneficial. To take a theme from one of the prior sessions, that is not just meant to make it easier for your organization to do less work, but actually focused on the customer, to actually improve their experience. I think that is where some of the biggest aha moments are going to come from: truly having an AI that listens to what benefits the customer experience, and that, by the nature of AI, it is already well equipped to handle.
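A toy simulation of the feedback loop Paul describes, in which outputs folded back in as inputs drift the pool of answers toward the most common one. This is entirely illustrative, not a model of any real system:

```python
import random
from collections import Counter

random.seed(0)
# Start with a pool of varied answers; one is slightly over-represented.
answers = ["nuanced A", "nuanced B", "nuanced C", "common answer"] * 3

for generation in range(6):
    # Each generation, the system mostly re-serves popular prior outputs,
    # so each answer's chance of surviving is proportional to its count.
    counts = Counter(answers)
    answers = random.choices(list(counts), weights=list(counts.values()),
                             k=len(answers))
    print(generation, dict(Counter(answers)))
# Over generations, one answer tends to dominate and nuance diminishes.
```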

Tim Chambers (32:53):

All right. Anybody else? I think we are close to up on time. Anything else?

Tim Chambers (33:01):

All right. Well, thank you all very much. Kevin, Paul, thank you, great job. And thank you to the audience.