AI is transforming financial crime compliance, but where is it delivering real results today? While much of the industry's focus has been on customer-facing AI, the biggest impact may be in streamlining compliance, reducing risk and improving decision-making behind the scenes. In this panel, industry participants share their insights into how AI is being used to enhance compliance workflows, improve risk assessment, reduce friction between clients and regulatory requirements, and increase collaboration between institutions and AI experts.
Transcription:
Alena Fedorenko (00:11):
All right. My God, I'm so loud. Can you hear me well? Yeah, definitely. I can definitely
(00:17):
Hear you. You definitely can hear me well. Okay, that's great. Hi everyone. Thank you so much for joining us today. Please come closer. I know it's a bit loud, it's a bit hard to hear when you are at the back. We have some free space here. So first of all, I'm going to moderate today's session, so let me first describe what we are going to talk about. As you probably know, regulation is tightening, fraud is increasing, and gen AI is not making our life easier. There are so many things we need to control, and there are so many challenges we need to overcome, right? At the same time, gen AI, agentic AI, and other technologies are booming, and they're providing new opportunities. And maybe some customer-facing solutions, like customer care chatbots or some cases in marketing and sales, are more visible, right? You can read about them in articles, et cetera.
(01:21):
Risk and compliance is not so visible, right? Companies are less vocal about this, and we all understand why. But today we would love to touch on this topic. And the reason for that is, first of all, we all understand that the risk is huge, and risk and compliance is a huge opportunity in terms of the potential impact from AI. I believe this function is in the top five in banking in terms of potential impact. At the same time, while we see actually good adoption of generative AI in banking in general (organizations already say that 70% of them actively use it), for risk this number is 20%, and you can clearly see the huge gap. So today we are going to touch on this a bit, and we have two brilliant speakers who are going to walk us through this journey. And I want to give you a chance to introduce yourselves. Oh, and did I introduce myself? No. Okay. So I'm Alena. I'm an Associate Partner at McKinsey. I co-lead our gen AI and banking practice, and I'm happy to be here. Please.
Chris Brown (02:32):
Thank you, Alena. So, hi, good afternoon everybody. My name is Chris Brown. I'm the President of VASS Intelygenz. We are an AI consultancy and delivery and deployment organization, very focused and specialized in the delivery of production AI into large enterprises, predominantly into the financial services industry.
Tiffany Patrick (02:53):
Thanks, Chris. Tiffany Patrick. I am with Citibank. I cover AML and compliance for all of our services clients, payment intermediaries, and foreign correspondent banks, and I work with the business to implement new data-driven solutions that are also compatible with compliance and regulation.
Alena Fedorenko (03:12):
Thank you. Thank you. And Tiffany, the first question is for you. I already touched on it a bit, right? It's such a big area, such a big opportunity for AI. Could you please explain to us why?
Tiffany Patrick (03:24):
Sure. So again, you mentioned the customer-facing applicability, but really for a compliance organization, all the way down from your analysts, all the way up to senior management, we are still responsible for understanding the risk that our clients and our clients' clients bring to our firms. Right now, those processes are executed extremely manually. They are disparate across organizations. There's data in all different product processors and systems, and someone has to grab that and pull it together. So what we're seeing on the compliance side is that we have hired compliance and fin crime professionals, and they're spending the majority of their time being data aggregators, not risk officers. And that's what we really need to solve for. I think there's been a little bit of apprehension from a compliance perspective to adopt AI, because people just see, oh, it's AI, and there are models and there might be some bias, and I don't know how to explain it, so I'm not going to touch it. But you really don't have to look at it from that perspective. There are problems that have existed as compliance organizations get larger and larger and regulations change. AI can provide simple solutions up front, and then you can iterate off of that. So it's really starting to change the mindset: AI is not this really big, scary, big-bang solution. It is something to improve and allow your employees and your risk experts to do their actual jobs and not be so concerned with aggregating data.
Alena Fedorenko (04:55):
Thank you. Thank you. And also, this area, it's so manual right now,
Tiffany Patrick (04:59):
Extremely manual. I think for everybody here, whether it's a payment firm or a neobank or a TradFi institution, everything revolves around makers and checkers. And then we just keep adding makers and checkers to these processes. And we're fulfilling the regulatory requirement, but are we doing it in the best way? Are we doing it in the most consistent way? AI is a tool, and not a replacement, to help us get to that level of consistency and quality.
Chris Brown (05:30):
I think just to add a little bit: I am not even going to try to compete with Tiffany on compliance and risk and everything that Tiffany knows about compliance and risk. Where we come in is when we're working in financial institutions, particularly in the back office. We've been doing deployment of AI into production for 10 years, but we spent 25 years on process automation. And I love what you say about AI as a tool. It enables us to automate more of the process. It enables us to automate processes that could not be automated prior to AI. So when you think about wherever there's a process in play, and complex processes and long processes and things that can go a little bit awry in all different places because of the large number of variables in the processes, AI is just a tool. It's just a tool that's going to help you augment processes that are hopefully already automated to the maximum ability. But if they're not, then we're going to automate, and we're going to add AI in order to make that happen. So we're very horizontal in our thinking on this. This is about moving data through a set of systems in a way that you answer questions, and you get the answers in the most accurate way possible. And KYC/AML is a huge, long, complex process in certain instances that needs better tools. And that's it.
Tiffany Patrick (07:05):
Alena and I were just talking before the panel about some of our larger institutional clients, payment intermediaries. They're so complex, they're so multinational, that by the time you've updated the record and finally finished it, it's about time to update it again, and you're going to go through that whole process all over again. When you're able to just use smarter tools, you're not replacing human decisioning. We've heard "human in the loop" for compliance professionals; I want to understand it, and have the audience understand it with me, as keeping experts in the loop, not just humans. We're keeping the experts in the loop to do their risk compliance jobs. And that will not just benefit the risk side; it has a client impact. You're not reaching out to the client every six months for new information, or for information you already had that was sitting in a different data store. It is an end-to-end improvement on the client experience, essentially market capture even, and then your regulatory compliance, all combined into one.
Alena Fedorenko (08:03):
Thank you. Thank you. And we already started talking about this a bit, but I think every discussion we have about AI eventually ends with: give us real examples, right? Because okay, that's great to talk about AI, but can you actually describe real examples, something that is actually working, not just pilots and experiments or some vision for the future? So let's talk about real examples. Chris, I'm sure you have some.
Chris Brown (08:31):
In fraud, in the fraud area.
Alena Fedorenko (08:34):
For example,
Chris Brown (08:34):
I can talk about fraud. So if we think about where we get engaged in fraud examples in finance, it's pretty broad, right? Everything from scams and phishing frauds, to AML/KYC, which is under the umbrella of fraud and risk, to transaction fraud. Actually, let me pick transaction fraud. I'll pick transaction fraud because I think there's a lot of conversation there. We know AI absolutely breaks the paradigm of rule-based fraud detection. That's a totally understood and known fact now: it can detect much more. There are really two reasons why AI becomes really, really impactful in transaction fraud detection. The first one is the nuance of the patterns it can detect, so you get an improvement in detection over a rule-based engine. But the other one, which is fundamental, which really gives it a step up, is its ability to automatically move as the patterns of data and the patterns of fraud change.
(09:40):
Whereas if you imagine a rule-based engine, it's effectively a human-written set of rules, so you need human detection of pattern change and human rewriting of the rules. With AI you don't need that, and that gives you a massive step up just from a technology perspective. I'm not talking about our implementations, I'm talking just from a technology perspective. So if I take that as an example, and I've learned a lot of things as we go through this as well, I've learned from our clients and our partners: when you look at transaction fraud, you can increase your true positive detection. So you can do great things and you can increase true positive detection. But what happens is the money flows on false positive reduction, where you see orders-of-magnitude change in money being able to flow. So the challenge doesn't become, hey, can you detect more fraud?
(10:35):
Which ordinarily is where I thought we would be going with this. It becomes: can you hold my detection of fraud and allow my false positives to reduce, to allow all of the transactions to flow? And that is really where we've been having deployments. In our latest deployment, we did a 56% increase on true positives, and that has tens of millions of dollars of impact, right? It's great. But we did a 72% decrease on false positives, which allows hundreds and hundreds of millions of dollars to flow. And that's where the big target has been, because that's where the big money lies.
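To make that tradeoff concrete, here is a minimal sketch, not the panelists' actual system: pick an operating threshold for a fraud classifier that holds true-positive detection at a target level while minimizing false positives, using scikit-learn on synthetic data. All names, thresholds, and numbers below are illustrative assumptions.

```python
# Minimal sketch (illustrative only, not the system discussed on the panel):
# choose an operating threshold that holds fraud detection at a target recall
# while minimizing the false-positive rate, so legitimate transactions flow.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_curve
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced "transactions": roughly 2% fraud.
X, y = make_classification(n_samples=20_000, n_features=20,
                           weights=[0.98], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y,
                                                    random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]

# The ROC curve gives the FPR/TPR pair at every candidate threshold.
fpr, tpr, thresholds = roc_curve(y_test, scores)

# "Hold my detection of fraud": among thresholds that still catch at least
# 90% of known fraud, take the one with the lowest false-positive rate.
target_recall = 0.90
ok = tpr >= target_recall
best = np.argmin(fpr[ok])
chosen = thresholds[ok][best]
print(f"threshold={chosen:.3f}  TPR={tpr[ok][best]:.2%}  FPR={fpr[ok][best]:.2%}")
```

The shape of the exercise mirrors the numbers Chris cites: once recall is pinned, the business value comes from pushing the false-positive rate down so that money can flow.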
Alena Fedorenko (11:17):
And this is actually a very good example, and I want to emphasize it, because sometimes when people talk about generative AI, they assume that AI is just good for generating emails to your boss, editing texts, generating marketing campaign titles and all that stuff. But surprisingly, you can actually use generative AI for fraud recognition. And it works surprisingly well, because generative AI works with the meaning, not with the exact words. So it can understand the intent of a message, and it can be much more accurate with detection.
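What "working with the meaning" can look like in code: a minimal sketch assuming the open-source sentence-transformers library and a hypothetical similarity threshold, scoring a message by embedding similarity to known scam phrasings rather than by exact keyword match.

```python
# Minimal sketch (assumes the open-source sentence-transformers library):
# flag messages whose *intent* resembles known scam phrasings, even when
# the exact words differ from the templates.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

known_scams = [
    "Your account has been locked, verify your details immediately",
    "You must pay a release fee to receive your package",
]
incoming = "We noticed unusual activity; confirm your credentials now or lose access"

scam_vecs = model.encode(known_scams, convert_to_tensor=True)
msg_vec = model.encode(incoming, convert_to_tensor=True)

# Cosine similarity scores the meaning, not the words: this message shares
# almost no vocabulary with the templates but carries the same intent.
score = util.cos_sim(msg_vec, scam_vecs).max().item()
if score > 0.5:  # hypothetical threshold; it would be tuned on labeled data
    print(f"flag for review (similarity={score:.2f})")
```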
Chris Brown (12:00):
That's absolutely true. I will say, in the example I just gave, just to be absolutely crystal clear, they were ML algorithmic models, not generative AI models, that created the true positive and false positive detections. So the example I gave was not gen AI; it was absolutely ML algorithmic.
Alena Fedorenko (12:20):
That's also true. Remember the new methods, but also use the things that have worked for quite a while.
Chris Brown (12:27):
Don't get lost in hype. Use the best tool for the job to solve the outcome that you're looking to achieve.
Alena Fedorenko (12:34):
But I actually know a couple of examples of using gen AI for fraud notification,
Chris Brown (12:38):
For sure,
Alena Fedorenko (12:39):
And it works quite well, yeah.
Tiffany Patrick (12:40):
No, it's true, because you talked about rules-based, and for so long in transaction monitoring, that's what we had. That's what worked at the time. And it takes a good six months plus to do a new rules-based model: get all your evidence, run it through model risk management, prove it all out. Listen, by that time the bad guys have changed their MO; it's over. You're going to catch whoever's behind; you're not going to catch what's up front. And AI allows you to iterate, feed data, experiment, and get to those right, fit-for-purpose monitoring scenarios much, much quicker. Will any solution capture everything a hundred percent of the time? No. But it is far more efficient than a group and a team of individuals honestly spinning their wheels because the tactics have changed. So use AI in that way. And what's really important, again: it's not big bang.
(13:33):
I think a lot of compliance professionals are afraid that, okay, if we do this and we switch everything else off and no one really understands it, what are we going to do? Use your sandbox environments. Banks are starting to move toward this; fintechs already have them. Use that. Feed it data. Then you're familiar with it, and you can implement it. But if you're not familiar with it, then you're never going to explain it to the regulator, and that will fail. Innovating requires education: education of yourself, education of your stakeholders, and making this an incremental habit to introduce. Listen, we all had to learn how to use the internet at one point. It wasn't here before, and it just became the norm. This is the new norm. At some level it will be in your daily life, and we need to take advantage of it to support ourselves, not replace ourselves.
Chris Brown (14:21):
And I don't want to bore everybody, but give me one more minute, right? Because,
Tiffany Patrick (14:25):
For sure, please, please.
Chris Brown (14:26):
But you keep bringing up good points. I want to be real. I want to be absolutely real about what we did, what we didn't do, what we can't do, what you can't do. And when I say we, I mean from a technological perspective. In some fraud cases or transaction fraud cases, in the majority of them, because you do want to be conscious of risk and you do want to be conscious of compliance, we didn't switch off rule-based systems. The rule-based systems remained, and these are the traditional players, the IBM Safer Payments and all of those guys; they did not go. What you create is the ability to automate and generate rules, and also to create guardrails around what the AI is saying. So depending on the risk strategy of the organization, you're not flicking the switch and holding tight and hoping this thing's going to work, right? There are millions, hundreds and hundreds of millions, of dollars at stake. So you have to have a deployment plan that matches that environment. The right tools for the right job to do the right automation.
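A minimal sketch of that hybrid pattern, with hypothetical function and field names rather than any vendor's API: the rules stay authoritative, the model score is advisory, and explicit guardrails bound what the AI alone may decide.

```python
# Minimal sketch of the hybrid pattern Chris describes (hypothetical names,
# not any vendor's API): rules stay on, the model is advisory, and guardrails
# bound what the AI alone may decide.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str   # "block", "review", or "allow"
    reason: str

def decide(txn: dict, rule_hits: list[str], model_score: float) -> Decision:
    # Guardrail 1: hard rule hits are never overridden by the model.
    if rule_hits:
        return Decision("block", f"rules fired: {rule_hits}")
    # Guardrail 2: the model may only auto-clear below a value cap.
    if txn["amount"] > 50_000:
        return Decision("review", "above AI auto-decision cap")
    # Inside the guardrails, the model score drives the outcome.
    if model_score >= 0.9:
        return Decision("block", f"model score {model_score:.2f}")
    if model_score >= 0.6:
        return Decision("review", f"model score {model_score:.2f}")
    return Decision("allow", "clean under rules and model")

print(decide({"amount": 1_200}, [], 0.31))
```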
Alena Fedorenko (15:31):
That's true, that's true. Okay, let's move to how to make it real, because I know in many organizations they're prioritizing the use cases, they're experimenting, but after that nothing happens. And that's why we have this 20% adoption in the industry, and not for the greatest use cases, just for basic use cases. So Tiffany, what do you think are the key roadblocks?
Tiffany Patrick (15:58):
Yeah, we have a lot of those in compliance, just in our day-to-day life, and now with introducing AI. Again, I said it earlier: it's that education bit, where we really have to let people understand and get familiar with what we're doing and take them on that journey, and not just our internal stakeholders. Regulators are sort of jumping into this. They're now responsible for supervising this, and they're on that learning curve as well. So where you can partner, where you can educate, please do it. All of our different approvals and risk checks that we've had for every other process in the bank weren't designed to look at AI. It's just sort of, well, we don't have any other process, so we're going to stick it in here. And we're having to make sure that everybody approving these things is also aware of what's going on with AI.
(16:46):
So you have to be very cognizant. Whether it's model risk management, third-party procurement, data risk, or cross-border data clearances, you have to start over again in most cases. Understand that these, and I don't want to call them roadblocks, they're more slight delays if you will, are steps that have to be done to protect the integrity of your firm. They are definitely achievable, but it will be a longer road. So the sooner you start these conversations, the easier it will be. And then make sure that you're bringing your partner, if it's a third party, or your internal tech team, to every conversation, because you're not going to be able to move this through to even get to POC testing if you don't do that and keep them updated. I think that's also been key. You get excited that you've got all the paperwork done and now you can start, and you never talk to those counterparties again. But you're going to have to come back when you want to do the next iteration. Give them those updates, make them feel involved, and that will uplift the entire culture.
Alena Fedorenko (17:49):
Love it. I would also add, and you touched on it a bit, that we are basically changing the way people work. AI is not just one more tool in your toolkit; it genuinely changes the way you communicate with the world. I had a fun example. We built a POC for generating memos, and it worked perfectly fine. It was super accurate, actually more accurate than people in some cases. And we just needed to implement it in the organization. We started doing interviews, and we quickly realized that people were not using Word for memos. They were working in Excel, and this Excel was also used for many different other things. So eventually it's not just about implementing one solution. This is about transforming the whole value chain, the whole process end to end. And this is such a big thing.
Tiffany Patrick (18:50):
It absolutely is. I mean, we have our directives, we have our board priorities, and they're general. They're meant to be general so that you can create those solutions. Simplify KYC, implement AI: that's the direction. And now it's our job to understand how we create those solutions. And there's a wide range of solutions. It could be small, it could be big, it could be medium, and you bring in those partners. But we all want the same thing. We all want a better client experience. We all want a more streamlined and clear risk mitigation process, because that's how we're going to ensure the longevity of our clients and all of our relationships.
Chris Brown (19:28):
And I think it's super easy with hype technology, and let's face it, AI is still hype technology and everyone's talking about it, to get lost in the model and the data, LLM versus ML algorithmic. Everyone loves the sexy bit, and it's about 20% of the solution. The rest is engineering. The rest is integration. The rest is about having data pre-processed, about creating feature stores. It's about ensuring that the data can flow. But it's also about human-centric design. Some organizations forget that they are building software, but on the whole, good, successful software doesn't forget about human-centric design. There are users at the end of the day. We talked about keeping experts in the loop: experts have to use this solution. So all of the focus goes into the super sexy bit, because everyone wants to talk about AI, but if you forget to bring your users along on the journey, if you forget about human-centric solution design, you'll have an amazing piece of scientific and mathematical genius work that nobody will use, because they don't trust it, or they don't like it, or for whatever reason.
(20:34):
So it's critical to not forget that you're building software for users.
Alena Fedorenko (20:40):
That's true, that's true. And I think one more challenge that I often see is how to find the balance between this amazing automation from AI and human in the loop, right? Because as you said, Tiffany, it's not like we are substituting people. You need to find the right balance. Chris, maybe you have some experience with this one.
Chris Brown (21:02):
Yeah, massive experience. So at the very beginning of a project that you're going after, I think you have to understand your objectives and your goals. And there are genuinely some cases, or let me say sub-cases of solutions, where you can take humans out of the loop, and it's great. We've seen it in history without AI: what time is the shop open today? I can press one, and it'll give me opening times. You're taking the human out; it's a super simple example. There are AI cases where you can take the human out of the loop, but you have to understand your environment. You have to understand your risk. You have to understand your audience and stakeholders. And by that I mean: is your customer going to accept that there's no human in the loop? Sometimes yes, sometimes no. Is the regulator going to accept that you've got humans in the loop, or humans out of the loop?
(21:53):
So it's about understanding, because it's never a hundred percent or zero percent. There are cases and sub-cases, and it's about making that thinking happen at the very, very beginning of your project: where am I trying to get to? And then, decided at the beginning and executed at the end, what's your deployment strategy? One of the big things that we like to do is to ensure that we are in technical production as fast as we possibly can get, right? Sometimes that's a bit more difficult in large organizations, and sandboxes come in and all of that good stuff, but we want to be in technical production, because you just cannot replicate technical production data. The reason we say technical production is because if you just say production, people think you're holistically in production, that it's going to be facing customers, and everyone's crapping themselves.
(22:50):
But no, you want to be in technical production in shadow testing, because, and I'll go back to deployment strategy for AI, in all the years we've been doing this I have never seen "flick the switch and hold tight" work. That's not a deployment strategy. You have to have a dial, where you say: look, I'm going to run this whole thing in technical production and shadow test it. Shadow testing is key, right? Shadow test it, and check: where's it working well? Where's it not? Where do I need to improve it? Hey, it's working really well, consistently, over here, so I'm going to turn the dial. We're going to start to switch that on, and that might be with or without human in the loop. But you're starting a deployment strategy where you're just turning the dial as you get confidence, and more confidence, and more confidence.
(23:33):
And at some point you might go back to the question of human in the loop: hey, I'm switching this on with a human in the loop; now I'm super confident, so I'm going to switch this on and take the human out of the loop. Or it might be: I'm never, ever going to take the human out of the loop. That's not the intention. The intention is that I need more throughput. I need KYC/AML faster than 120 days, and I can't economically scale humans to the point that allows that to happen. So how do I make that happen with technology? So everything is about planning at the start, making sure you've got a good deployment strategy from the beginning, and executing that deployment strategy.
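A minimal sketch of that dial, with hypothetical names: the model scores every transaction in technical production, but its decision is only acted on for a configurable slice of traffic; everywhere else it runs in shadow, with disagreements against the incumbent rule-based decision logged for review.

```python
# Minimal sketch of the deployment dial (hypothetical names): score everything
# in technical production, act on a configurable slice, shadow-log the rest.
import hashlib

DIAL_PERCENT = 5  # start small; turn the dial up as confidence grows

def in_live_slice(txn_id: str, percent: int) -> bool:
    # Deterministic bucketing: the same transaction always routes the same way.
    bucket = int(hashlib.sha256(txn_id.encode()).hexdigest(), 16) % 100
    return bucket < percent

def handle(txn_id: str, rules_decision: str, model_decision: str) -> str:
    if in_live_slice(txn_id, DIAL_PERCENT):
        return model_decision          # dial is turned on for this slice
    # Shadow mode: log disagreements for review, act on the incumbent system.
    if model_decision != rules_decision:
        print(f"[shadow] {txn_id}: rules={rules_decision} model={model_decision}")
    return rules_decision

print(handle("txn-0001", rules_decision="allow", model_decision="review"))
```

Raising `DIAL_PERCENT` is the "turning the dial" step: the same code path goes from pure shadow (0) to fully live (100) without a flick-the-switch moment.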
Alena Fedorenko (24:09):
Love it. Love it. I love your idea. Some companies think that human in the loop is just about making sure people can check the answer, but it's more complicated than that, because it's about: where are the right moments? What is the way you want to build this intervention? In many cases, it's not just people reading the output and saying yes, it's good, or no, it's wrong. In many cases, it's people checking the conversations between agents, people checking the diagrams of how the solution is working. Also, are you enabling your users with the right tools to do this check? Maybe adding confidence scores. All that stuff is really important.
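One way to tool that up, as a hypothetical sketch rather than any firm's workflow: route only low-confidence outputs to the expert queue, and show the expert the confidence score and the supporting evidence rather than a bare answer. All names and thresholds below are illustrative assumptions.

```python
# Hypothetical sketch of confidence-based human-in-the-loop routing: only
# low-confidence outputs go to an expert queue, and the expert sees the
# model's confidence and cited evidence, not just a bare yes/no.
AUTO_ACCEPT = 0.95   # thresholds would be set with model risk management
NEEDS_EXPERT = 0.70

def route(case_id: str, answer: str, confidence: float, evidence: list[str]) -> str:
    if confidence >= AUTO_ACCEPT:
        return f"{case_id}: auto-accepted '{answer}' ({confidence:.0%})"
    if confidence >= NEEDS_EXPERT:
        # The expert reviews with context: score plus supporting evidence.
        return f"{case_id}: expert queue ({confidence:.0%}) evidence={evidence}"
    return f"{case_id}: rejected; rerun or escalate ({confidence:.0%})"

print(route("kyc-784", "within risk tolerance", 0.82, ["ownership doc p.3"]))
```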
Tiffany Patrick (24:56):
They're not just there to read the answer and make data entry simpler. That's not the point. The point is to assess, to be that risk mitigation expert in the compliance space, to be that decision maker. Because at the end of the day, those individuals are responsible for saying: this deal is in risk tolerance, yes or no. This activity is in risk tolerance, yes or no. And when they're able to have that information aggregated and presented in a succinct way, that's great, but their responsibility doesn't stop there. They're responsible for giving feedback to the tech teams, to the developers, saying: we need to tune this a little bit better. Make them an active participant in that process, because you're not going to be able to just turn it on once and be done. And here's what it also creates: for those of you who've experimented with prompts or just AI on your own, it actually makes you really intentional in your critical thought, because if you put in a bad prompt or bad code, you're going to get a bad output.
(25:54):
So it really makes you think: okay, what do I actually want out of this? I'm sure many of you have backlogs of tech requests for fixes you've wanted for a very long time, but they weren't critical and didn't make it into the book of work. They didn't make it to funding. But now you have the ability to look at that: where have all your pain points existed? How can you use a new tool that everybody says you have to have now, in some form or another, and implement the change and iterate on it with those human experts as you go through that journey?
Alena Fedorenko (26:24):
Agree, agree. Maybe a last question before we move to the audience, because I want to make sure they can ask their questions, at least a couple of them. It takes a village to build a solution, so you actually need to manage your ecosystem of partners and everything. What does it take to build the right partnership? Maybe a question for both of you.
Chris Brown (26:50):
This might get embarrassing for Tiffany. You need a Tiffany. Honestly, you need a Tiffany, because there are a million reasons not to do something. And when you're out there and you're pushing boundaries and you're putting a new technology in, you need a partner that's willing to champion change, that understands how to navigate complex organizations and to drag tractor tires through fields, as we talked about on a plane the other day, to get the organization to move. So for us, a great partner is someone who looks like Tiffany, right? Someone willing to champion change through an organization and allow it to happen. Because we can bring all of the technical skill, we can bring all of the experience, we can bring all of the frameworks and all of the different model expertise and engineering and feature store and data expertise, but we have no permission to go do that unless we get permission from our partner. So for us, a great partner is somebody who is absolutely ready and willing to challenge their organization to change. Sorry.
Tiffany Patrick (27:54):
Thanks.
Chris Brown (27:54):
It's a bit embarrassing,
Tiffany Patrick (27:55):
No, that's okay. What it really takes is to find the use case that you are truly, truly passionate about, because that will enable you to dig into the details and understand it inside and out, to the point where you can talk about it in your sleep, because you truly, truly believe in it. And then you get your team behind you and you keep doing that education, almost evangelizing throughout the organization, because we have all heard the rhetoric of "we need AI," but who's going to do it? And when you hit that first roadblock, because you're so passionate about it, that's not going to stop you. You're going to keep going. You're going to get more buy-in, because at the end of the day, this is the right thing to do. And then remember the impact that that's having on the individuals in your teams, in your firms, who are waking up every day, coming to do their jobs, spending the majority of their waking hours with you. Is what you're doing helping change that? Is it helping change their lives and your clients' lives, and having a lasting impact that's truly measurable?
Chris Brown (28:51):
I know I'm a little biased because we come from a very technical background, but the technology is the least of the problems for us. It's the change, and the permission, and navigating the organization. These solutions are at their best at scale. They can work at lower scale, but when you've got big scale, you've got big value return, and those organizations are the most difficult to navigate. So the technology becomes the least of our issues. I know it's where the hype and the focus are, but it's the least of the issues.
Alena Fedorenko (29:24):
That's true. That's true. Thank you. Thank you so much. I think we have time for a couple of questions if we have some in the audience.
Audience Member 1 (29:34):
Sure, a question for you. Have you noticed people becoming protective about their roles, and how do you create cultural safety so that people don't see it as a threat? How do you get people to open up about what it is they do despite that fear? How do you put that fear away?
Chris Brown (29:59):
Do you want me to answer that one? Well, I'll give you my view. We see it from a couple of different angles. We see protectionism from our vantage point, from where we deploy. So there's a little bit about the human-centric design, the solution design, the comfort around how this is going to get deployed and how it is going to change my life. You have to have that conversation with people, and that can sometimes involve a complete change of role. It can involve: hey, I need you to adopt a new system, because a new piece of software is coming in. Your role isn't moving anywhere. It's not going anywhere. It's going to change, it's going to evolve, you're going to use new tools and techniques, and hopefully your world is going to feel a little bit better. So I think you have to have that conversation. The other place where we see resistance to change, only because we're a third-party supplier, is in the internal IT organizations when a new technology comes along.
(30:51):
People may have been working on legacy systems for a long time, and all of a sudden something new and exciting comes along, and you're inviting a third party to come in. So we see a lot of protectionism there. So we want to talk to the IT partners inside our clients and say: look, we're here to work together. We want you to be part of the project. This is not an aggressive takeover. We have to navigate those conversations with humans. They're human beings, so we have to talk to them as human beings and explain what the change looks like.
Tiffany Patrick (31:21):
And I see that quite often in payments and innovation. Innovation means change, and people are naturally averse to change. And I think you touched on it at the beginning, Chris: it comes down to what's in it for them, what's in it for me. So you approach it with that conversation: this is going to make something easier, this is going to make the execution of your process better, and you bring that to the table. And they're going to ask a million questions, or maybe they'll go silent on you and not ask any questions at all, and then they'll go back and talk about it amongst their peers. It's the consistent engagement that is going to make the difference. It can't be: all right, we're going to have a department-wide meeting, we're going to have a town hall, AI's coming in, and then they never see your face again.
(32:04):
That's not going to work. It has to be consistent: showing the follow-ups, engaging them, asking those open-ended questions. Does this work for you? I want to make sure that you're good with this. Will you get a hundred percent buy-in from everyone all the time? No, that will never happen. But being fully transparent about what you're doing will help stop some of those initial knee-jerk reactions of "what's happening?" I think a lot of people get worried: my job's going to get replaced. Actually, if you learn this and you look at this, you've opened up a new career path and development option for yourself, where you can find a new challenge, get to that next level, and navigate throughout the organization. That's a benefit to them regardless of the process being implemented. That's a personal skill that can never be taken away from them. I think people forget that, and that also needs to be part of the conversation.
Alena Fedorenko (32:57):
And I love your point about transparency, right? Because many organizations are currently too scared to share. How do you share that you're going to decrease headcount, if you're going to decrease headcount? Maybe you don't know. Being in this very transparent and honest, constant conversation with your employees, and sharing what you know and what you don't know, is a critical part of the equation, because everybody's scared, and staying silent doesn't help.
Chris Brown (33:29):
So, 11 years. This is the full truth: in the 11 years we've been doing this, from memory, and I admit I haven't got a great memory, I don't remember a single implementation that has ended up in headcount reduction. I promise you. I know there's a big cliché that it will change roles, not remove roles, and I don't think that's a hundred percent true either. Because what happens is you start with: hey, I'm going to do this for cost reduction. And what it ends up with is: well, hold on, if I can do this, I can change the market, and the growth opportunity is significantly higher than the cost reduction opportunity. So people start out with a conversation about cost because it's easy to justify, right? You can calculate it. But you end up with: well, hold on. And whether that's ticketing platforms or KYC/AML, the growth opportunity, or the engagement opportunity, or the experience uplift, is significantly greater than any cost reduction that you can get. And that's where the project ends up heading.
Alena Fedorenko (34:38):
That's so true. I believe we're out of time and I'm sure you still have questions. We are going to be here. We are going to be around. Please find us and ask us. I'm sure Tiffany and Chris and I will find time to cover them. But thank you. Thank you so much. Thank you.
Chris Brown (34:55):
Thanks Alena. Thanks Tiffany.
AI in Financial Crime Compliance: Balancing Innovation, Risk, and Real-World Impact
June 2, 2025 1:00 PM
35:02