A Modern-Day Playbook for AI-Driven Compliance and Risk Management

Perhaps nowhere is artificial intelligence more vital to digital banking than in identifying risks and ensuring compliance with regulations, while also increasing operational efficiency and futureproofing against known and unknown risks. Financial institutions that want to develop a modern-day playbook for AI-driven compliance and risk management can learn from these experts and practitioners how to build an integrated approach: a compliance framework that meets banks' needs in a complex regulatory environment with greater efficiency.


Transcription:

Holly Sraeel (00:10):

Okay, we need to get started. I'm Holly Sraeel, Senior Vice President of Live Media for American Banker. To my immediate left is Pinar Kip, EVP CIO, International Risk Governance and Transformation for State Street Bank, and to my far left, Carissa Robb, Partner, Banking and Financial Services Practice at SolomonEdwards. We're going to be talking about how to develop a modern-day playbook for AI-driven compliance and risk management. So let's get going. Can you tell me where you think the industry stands in terms of adopting AI for risk identification and using it to ensure compliance with regulation?

Pinar Kip (00:50):

I'm happy to start. I would say very, very early days in terms of the potential of what we could do in the risk and compliance space. In certain areas where data and pattern recognition are what the risk management process is, there's probably better use of not gen AI, but what we now call traditional AI: pattern recognition, machine learning, natural language processing types of applications. But if you think about what risk management really is, which is triangulation of data and thinking about leading indicators of where risk could come from, it is very, very early days. And I'd say part of it is because risk and compliance functions, especially in the last year, have been very focused on determining how to manage the risk of AI and have spent less time thinking about how to adopt AI. And part of it is that the technology is moving really, really fast, and most of what is available to us is at a very early point in the journey. A lot of banks, and a lot of my colleagues I speak to, have chosen to make most of their investments in the areas where they'll get the most traditional financial ROI, whether it's product innovation or productivity in their organization, and risk and compliance is just coming up next.
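
To make the "traditional AI" Kip contrasts with gen AI concrete, here is a minimal sketch of unsupervised pattern recognition over transaction features. Everything in it, the features, the library choice, the 1% contamination rate, is an illustrative assumption, not a description of any bank's monitoring stack.

```python
# Illustrative only: anomaly detection as "traditional AI" pattern
# recognition for transaction monitoring. Features and threshold are
# hypothetical, not any institution's actual logic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=7)

# Stand-ins for engineered features: amount, hour of day, and count of
# transfers to new counterparties in the past week.
normal = rng.normal(loc=[120.0, 13.0, 1.0], scale=[40.0, 3.0, 1.0], size=(1000, 3))
suspicious = rng.normal(loc=[9500.0, 3.0, 12.0], scale=[500.0, 1.0, 2.0], size=(5, 3))
features = np.vstack([normal, suspicious])

# Unsupervised model: points that are cheap to isolate are outliers.
model = IsolationForest(contamination=0.01, random_state=7)
labels = model.fit_predict(features)  # -1 = anomaly, 1 = normal

flagged = np.where(labels == -1)[0]
print(f"{len(flagged)} transactions flagged for analyst review")
```

The point of the sketch is the division of labor she describes: the model surfaces patterns, and a human analyst still makes the risk judgment.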

Carissa Robb (02:56):

Yeah, AI is going to have a place. And I think the folks that are balancing that, where AI is a tool, not the tool, are seeing the most success when they balance it out with people, process and technology.

Holly Sraeel (03:09):

When we did our prep call, you raised something, Carissa, that I thought was interesting. You said that B of A's cease-and-desist order from the OCC for its unsafe and unsound anti-money laundering practices reflects the potential for AI maintenance use cases and talent as a defense mechanism. Could you talk a little bit about that?

Carissa Robb (03:30):

Sure. So the December 2024 consent order against B of A highlighted something interesting. It wasn't that there wasn't a framework in place, and this was specific to financial crimes; it was that they didn't properly maintain or sustain or refresh it. And the parallels we saw in that consent order were deficiencies where new products or services would be introduced, new teams, new data. You see this a lot, not with this particular consent order, but with mergers and acquisitions: there's a limitation in being able to constantly renew and refresh, and that's where folks are getting trapped. And I think this was the highlight from the regulators, which isn't getting a lot of attention because it was missing the billion-dollar punchline. It was really focused on what internal steps you need to take to remediate framework and governance deficiencies. And it highlighted the ability to maintain.

(04:25):

So it was less about building. It wasn't saying that there wasn't a framework in place. It wasn't saying that there weren't proper roles and responsibilities or a strategy. It was saying you haven't refreshed, and you haven't kept up with the critical requirement of maintaining. And I think that's something we need to pay attention to in this somewhat awkward phase of regulatory enforcement: it's not about whether or not you are setting it up. Can you sustain it, can you refresh it, and can you keep pace? And the pace that's required for AI and ML is intense, to say the least.

Holly Sraeel (05:01):

So let's talk about the most important considerations for banks when developing an AI playbook for risk management and regulatory compliance. When we talked, Pinar, you said that banks must be ready to fail and make mistakes. Can you talk a little bit about what you mean by that?

Pinar Kip (05:20):

Absolutely. As I think about the best use cases for managing risk and compliance, and AI's use in that, it is really that end-to-end value chain that matters. To use it effectively, banks tend to spend a lot of time designing and thinking about where AI fits in today's value chain, which is usually where people start: I'm doing this much manual work, or I'm taking this many people to do this activity; if I can put AI here, I might be able to get 10x speed, I might be able to get 5x productivity. Let me start implementing, but let me make sure that I think about all the considerations first, let me test it, let me evolve it. And by the time it's been six months, eight months, or a year, the technology has completely changed on you and you start the process all over again. Not starting right now is itself probably the biggest risk that banks face, because until you start, it's very hard to know where the pitfalls for your organization are.

(06:24):

I think for most banks, definitely a lot more for global, large-scale banks, but I would argue it is just as true for one branch in one jurisdiction, the cleanliness of our data, the handoffs in our processes, and the manual and bespoke processes people have added are the real challenges to our ability to adopt AI. And until you start, it's very hard to know how that's going to come through. It's very hard to know how your people are going to interact with AI, because many of us don't use AI on its own; it's always human in the loop, and designing that is very hard to do without getting started. So knowing how to start in a safe environment, whether it's sandboxes, whether it's POCs, whether it's double-check reviews, but to start without years of design, starting by thinking about the future state, is really, really important. Not only to determine the best use cases for your organization, but also to understand how AI behaves and how you can manage the risk of AI in the best possible way.

Carissa Robb (07:34):

Yeah, I'll just add that metadata and taxonomy are critical. I think we can fast-forward and focus on the operational process or the underwriting criteria or the model that we're trying to implement, and accelerate through the data strategy component and the data governance component. And that is critical to whether the AI strategy and ML implementation, especially in credit risk, is going to be successful. And then I think the second piece is an evaluation of talent. Do you have the right people? And I would say across all three lines of defense, not just a highly technical team that sits in the IT space or one division of a risk or compliance function, but people across the enterprise who have a comfort level with what you're trying to do with the new strategy. I think people underestimate the skills that are needed across all three lines, and they hyper-focus on building one particular team responsible for the implementation work.
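
Robb's point about metadata and taxonomy can be made concrete with a small sketch: a governance gate that blocks a dataset from model use until required catalog fields are present. The field names here are hypothetical; real data catalogs are far richer.

```python
# Minimal sketch of a metadata/taxonomy gate for model-ready data.
# All field names are illustrative assumptions.
from dataclasses import dataclass, field

REQUIRED_FIELDS = ("owner", "lineage", "refresh_cadence_days", "taxonomy_term")

@dataclass
class DatasetMetadata:
    name: str
    tags: dict = field(default_factory=dict)

    def missing_governance_fields(self) -> list[str]:
        return [f for f in REQUIRED_FIELDS if f not in self.tags]

def approve_for_model_use(ds: DatasetMetadata) -> bool:
    missing = ds.missing_governance_fields()
    if missing:
        print(f"{ds.name}: blocked, missing metadata {missing}")
        return False
    print(f"{ds.name}: cleared for model training")
    return True

# A dataset with an owner and taxonomy term but no documented lineage or
# refresh cadence is exactly the maintain-and-refresh gap discussed above.
approve_for_model_use(DatasetMetadata(
    name="consumer_loan_applications",
    tags={"owner": "credit_risk_ops", "taxonomy_term": "retail_credit"},
))
```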

Holly Sraeel (08:39):

Okay. Can we talk a little bit now about M&A and how effectively banks are, or aren't, using AI for readiness assessments? I think, Carissa, you brought that up.

Carissa Robb (08:48):

Yeah, so this is a critical area. This is all we're doing right now. And it's interesting how much manual support goes into that initial due diligence period, specifically for mergers and acquisitions. It's agnostic to the size of the bank. We're seeing some of the smaller banks be more willing to use tools that can accelerate the identification of duplicate data and some of the cleaning and data quality work; the larger guys are a little bit behind, I think. And so we're seeing large M&A teams spend a considerable amount of time, effort and money on some of this data strategy and consolidation work, where that's a perfect candidate for, again, going back to the metadata and the taxonomy of the data. That's the perfect opportunity in the due diligence phase, before legal day one, when you're starting to identify some of the challenges that will make it really difficult to merge strategies.

(09:45):

So you can actually use AI to help you implement AI, see around those corners and get ahead of some of the common challenges that we see on the merger and acquisition side. And again, I think we've said this: it goes back to data. So people, process, technology, and then the sub-element of technology is the data. And I think the most important thing is, when you're focusing on alignment of products, alignment of services and alignment of staff, the alignment of data, and then the alignment of the use cases for AI, is now the fourth element in any critical strategy for a merger or acquisition.
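
As a rough illustration of the duplicate-data identification Robb describes in due diligence, here is a standard-library sketch of fuzzy matching across two institutions' customer records. The records, the 0.75 threshold, and the character-ratio method are hypothetical stand-ins for real entity-resolution tooling, which would add normalization, blocking, and human review of borderline pairs.

```python
# Sketch only: fuzzy duplicate detection across two banks' records.
from difflib import SequenceMatcher

bank_a = ["Jonathan Q. Smith, 12 Elm St", "Acme Industrial LLC, 400 Pine Ave"]
bank_b = ["Jon Smith, 12 Elm Street", "ACME Industrial, 400 Pine Avenue"]

def similarity(left: str, right: str) -> float:
    # Ratio of matching character runs, case-insensitive; 1.0 = identical.
    return SequenceMatcher(None, left.lower(), right.lower()).ratio()

THRESHOLD = 0.75  # illustrative; tune against labeled pairs

for a in bank_a:
    for b in bank_b:
        score = similarity(a, b)
        if score >= THRESHOLD:
            print(f"possible duplicate ({score:.2f}): {a!r} ~ {b!r}")
```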

Pinar Kip (10:23):

And I think there are a lot of parallels to your example in other spaces, from a business model perspective too, because M&A may be one of the best examples: when you are implementing a deal you agreed to after due diligence, there are a lot of aha moments, because you realize you haven't been speaking the same terms, or you didn't ask about the 1% of the business that actually turns out to be really important. And AI is really powerful at looking at very large data sets, with long historical lineage and the surround sound, to give you trends and opportunities that you can choose to ignore, but that before, just given the cost, we would never think about pursuing. And there are a lot of parallels in product launches, market entries and reorganizations within your firm, where a lot of AI modeling could give us insights that traditionally we treat as learnings of transformation, learnings of implementation.

Holly Sraeel (11:26):

Can we talk a bit about the cultural implications of AI usage? When we did our call, Pinar, I think you talked about trust in AI versus trust in people. Do you want to talk a little bit about that?

Pinar Kip (11:40):

Of course. What we are seeing, and I know many of my colleagues are seeing, is that AI adoption is especially hard, and gen AI adoption is especially hard, because it's difficult for people who are experts in their roles to imagine some of the judgment, some of the triangulation they bring to the table, being replaced by AI. I think over time, collectively as an industry, we got comfortable with our manual activities being replaced by automation, and with the workload that has increased as a result. But AI has been a very interesting conversation. Even the responsible AI frameworks we have put in place to manage AI don't really apply as effectively when gen AI is in place, because of the questions. Not to be shared outside, but we have a committee called Arrow, and we have a cross-section of, to your point, the three lines of defense, plus business and technology, around the table.

(12:48):

And as we are having conversations around whether or not it's appropriate to use AI in our employee copilots to ask about our investment policies, the kind of questions that come in are: well, that's a very complex policy; it's hard for me to know the answer, so how will AI know the answer? And that's a very natural human reaction. But AI works in a very different way. It doesn't have to be an expert in anything; it just needs to process everything. That cultural change, that it won't replace you, it'll supplement you, is taking a very long time. The other piece, especially as we step into gen AI and leverage agentic AI workflows, is the need to train everybody on what AI can do and should do. Not that they necessarily have to code AI, and maybe they don't even have to code their agents, but they should understand how it works well enough to share where they are in the workflow, so we can get the best out of it. And that's going to be especially important on the risk management side as this gets democratized. As much as I think the bigger banks have been slower to adopt some of the copilot and LLM usage, they need to. And when it's at the fingertips of thousands and tens of thousands of employees, managing that risk requires a very different way of thinking, a very different way of cultural embrace, that I don't think they have done at this speed before.

Carissa Robb (14:21):

Yeah, I'll share two stories. I was head of ops for one of the top 10 banks; you can tell where my wardrobe comes from. When we were first implementing this in the first line, folks were really, really nervous. This was several years ago, and they were expecting a van to pull up with literal robots that would come out and take their jobs. So we've come so far in training the first line to understand the benefits of AI and machine learning and gen AI. The challenge is there's not a lot of trust among the people in AI and what the expected output is, because there's not a lot of training that goes into how you get the answer. And so it feels very much like a black box, which causes a lot of intimidation. And so I think, when you're replacing or enhancing the opex side,

(15:14):

there's a deficiency in how we communicate what this means to the first line. The second risk, and this is maybe across all three lines of defense, is at the exact opposite end of the spectrum, where there's an over-reliance or overconfidence in what AI is producing. I'm sure you've all received emails that are clearly written by ChatGPT, where the sender can't tell you what they mean. So it has to be a partnership. And I think with cultural strategies, and certainly with communication, we have to do a better job of merging the relationship between AI and people, where the people provide the context and the people provide the training and influence the training of the bots. Otherwise you're going to make it faster to introduce bias into your organization, and you'll have a disconnect on what success looks like, on what the expected outcome is, across the organization.

Holly Sraeel (16:10):

Pinar, I think you talked about how at State Street, in the cyber area, AI brings increased sophistication of threats and vulnerabilities, but the bank is looking at creating leading risk indicators versus lagging indicators. Can you talk a little bit about that?

Pinar Kip (16:26):

Of course. I know there was a session yesterday that went into this topic in depth, so I won't repeat that. But in terms of the risks that AI creates in the security area, they're really increasing at a rate we haven't seen before. Take something as basic as phishing. Phishing has been around forever. We have tools to detect it. We all know, or try to know, what a phishing email looks like, what a phishing text looks like. But what AI is allowing now is very, very unique personalization of an email: one that knows who your boss is, that knows where you work, that knows where your kid goes to school, all separate pieces of public information that by themselves maybe don't matter a lot, but when they triangulate and show up to you as an email, you are a lot more likely to trust it and click that link.

(17:18):

You are a lot more likely to respond. And that risk is a lot harder to detect with our usual tools, because it doesn't have the anomalies that are usually the surefire signs. And our security training only goes so far. So that's the type of threat we have been working on in the cyber space. And the best way, again, this goes to my point that we have to engage in order to know how to use this, the best way to know how to defend against it is to understand how the threat actors are acting, how AI is actually being used, so we can defend against it. In the same way, to your point, one of the things we have been looking at in the cyber space, and in our risk space more broadly, requires acknowledging what doesn't work too well today.

(18:07):

So if we think about a lot of our KRIs, in most organizations, around most processes, and while there are exceptions, they are very focused on lagging indicators. What are our key risk indicators? How have they been trending? What can we do about it? What can we address? Partly that's because the amount of effort and resources it would take to determine the leading indicators was too expensive to give us that ROI. AI changes that. So we have been really trying to figure out which leading indicators can tell us when a risk is about to happen, or will happen in six months (probably not a year; that's too long a horizon to predict), so that we can invest. And it's okay for those leading indicators to be imperfect, because the ROI of getting them is a lot faster, and we're determining how to balance them in our risk management journey.

(18:59):

And overall, from a risk professional's perspective, accepting that while we do a lot to keep our organizations secure and regulatory compliant, we do everything within the risk tolerance our banks, our companies, hold, which means there is an element of that risk that today isn't managed as effectively as it potentially could be, and AI could open the aperture to do so. And back to your culture point, the acknowledgment that there are some things our existing tools and people can't do today, which we might be able to do differently, is also a very big opportunity that AI is offering up for us.
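
A minimal sketch of the lagging-to-leading shift Kip describes: instead of reporting a KRI after it breaches, project its recent trend forward and alert within the horizon she mentions. The series, appetite limit, and linear-trend method are all illustrative assumptions.

```python
# Sketch only: turning a lagging KRI into a leading signal by trend
# projection. Data, limit, and horizon are hypothetical.
import numpy as np

# Monthly KRI observations, e.g. % of access reviews completed late.
kri_history = np.array([2.1, 2.3, 2.2, 2.8, 3.1, 3.4, 3.9, 4.3])
THRESHOLD = 6.0      # risk-appetite limit
HORIZON_MONTHS = 6   # "six months out," per the discussion above

# Fit a straight-line trend: month index -> KRI value.
months = np.arange(len(kri_history))
slope, intercept = np.polyfit(months, kri_history, deg=1)

projected = slope * (len(kri_history) - 1 + HORIZON_MONTHS) + intercept
if projected >= THRESHOLD:
    print(f"leading alert: KRI projected at {projected:.1f} "
          f"(limit {THRESHOLD}) within {HORIZON_MONTHS} months")
else:
    print(f"projected {projected:.1f}: within appetite")
```

As Kip notes, such a projection can be imperfect and still pay for itself, because it is cheap to compute and moves the conversation ahead of the breach.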

Carissa Robb (19:38):

We spent a lot of time in 2024, when I think the OCC was hyper-focused on forward-looking risk indicators in the commercial space, supporting clients on generating forward-looking KPIs, and that part was easy. We could very easily identify what we wanted to look at that would give us a leading indicator on what sort of volatility or resiliency existed in that particular area. Where it fell apart was without the introduction of AI and ML, which would've strengthened the output: what is it really telling you? Because if you just introduce forward-looking indicators, you can often end up with a dashboard that doesn't give you a lot of insight. And so that goes back to the context and the use case: what is it that you're looking to strengthen, in this particular instance managing portfolio risk and looking ahead?

(20:30):

And so that's where the introduction becomes critical, where you can start to see the combination of data and other insights that you wouldn't typically look at, or that you look at in a silo that doesn't tell a story and doesn't give you additional insight to manage your business. And that goes back to the timing. There wasn't an investment in cleaning up the data to precede the implementation of AI and ML, which would then make AI and ML more impactful for forward-looking risk assessments and indicators. So again, it's the timing that we're seeing over and over again. People understand where they need to go. I think they have a very clear understanding of where they're at as an organization and where they want to get to. But the investment in that data component, which gets tricky in a merger and acquisition landscape, is lagging. And so your AI strategy, by default, will be lagging until that speeds up.
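
The data-cleanup investment Robb says must precede AI and ML can start as simply as profiling the tables first. Here is a standard-library sketch of the kind of pre-model data-quality pass she implies; the column names and sample rows are hypothetical.

```python
# Sketch only: profile null rates and duplicates before any model
# training. Columns and data are illustrative.
import csv
import io

SAMPLE = """loan_id,borrower,industry_code,utilization
1001,Acme LLC,5411,0.62
1002,Beta Corp,,0.91
1002,Beta Corp,,0.91
1003,Gamma Inc,3714,
"""

rows = list(csv.DictReader(io.StringIO(SAMPLE)))
total = len(rows)

# Null rate per column: share of empty values.
for col in rows[0]:
    nulls = sum(1 for r in rows if not r[col])
    print(f"{col}: {nulls / total:.0%} missing")

# Duplicate rate on the business key.
ids = [r["loan_id"] for r in rows]
dupes = total - len(set(ids))
print(f"duplicate loan_id rows: {dupes} of {total}")
```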

Holly Sraeel (21:25):

Okay. Quick question. Given what appears to be a relaxing, or more fluid, regulatory environment and the rapid expansion in use cases for AI, are you concerned that potential harm could be done?

Carissa Robb (21:39):

Yes, yes, yes. I think if I look around the room, we've all been through multiple cycles of regulatory swing. It's going to come back; whenever it does, it's going to come back. And so the work that you're doing now, the models that you're introducing, the use cases that you're introducing, will be evaluated. What I like to advise folks is: think of this three-and-a-half-year period as building the inventory, the volume that will be subject to regulatory review in year four, year eight, whatever it might be. And so you have to be really intentional, regardless of what the regulatory bite is. Beyond the punchlines of the consent orders and public fines, it's part of the sample population that will be evaluated. And so you just have to govern yourself.

Pinar Kip (22:28):

Yeah, I agree with that. And we are a global firm, so we have to, and some of our regulators are actually tightening, not easing up, in this space. So for us, and for many other global firms, it's really important, and I agree with you, to have your own framework that, yes, takes into account what the regulatory guidance is, and usually you end up needing to go with the highest bar that's available, but then have a culture, have your own internal frameworks, that ensure you're adopting it and adhering to it. So if a new regulator, or the US regulator, comes up with a different stance, it's a lot easier to explain what you are doing and why you are doing it, versus trying to adopt the new letter of the law that comes your way.

Holly Sraeel (23:14):

Okay. Before we throw it to questions from the audience, let me ask one last question. Looking ahead, how will organizations manage risks differently with AI, if you could throw out three ways?

Pinar Kip (23:25):

I'd say, one, they'll manage it end to end, across their silos, across their risk areas, versus within them, because data will allow them to do that. Two, they are going to use it on a much more frequent basis, so a lot more day-to-day, hour-to-hour risk management, which today is not available. And three, especially if they adopt AI the right way, they'll have the level of transparency to make decisions and make investments that in the past they weren't able to. But it is going to require a level of risk investment we may not have seen in the past.

Carissa Robb (24:02):

Yeah. I'll just add one additional thing, which is: expect to fail, and test and retest and introduce the concepts over and over again. That acceptance of failure, that it's not going to work the first time and it's going to get better the more you put into it, is the last element to consider.

Holly Sraeel (24:20):

And I think Pinar, you had said to me earlier that speed and regulatory burdens must coexist.

Pinar Kip (24:27):

They really do. And actually, regulators at the end of the day are master triangulators of what you have done wrong in one space, what you're doing in another space and how that comes through. AI allows most of our firms to do that triangulation before the regulators come and tell us. So there is a big compliance element to using AI, if done responsibly. I think the speed is going to need to keep up with that, and I do think regulators will want us to use it for good while, of course, following the rules.

Holly Sraeel (24:58):

Alright, questions from the audience? Anybody? No questions? Okay. Thank you.

Audience Member 1 (25:07):

Thank you for a very insightful session. Mine is in two parts. One is a comment on something you said around culture: I a hundred percent agree that the culture across the three lines is so important and critical to adopting new technology and changes; no one party can do that alone. And on the story around the policy portal and using AI, we have implemented that successfully, so I know that works; no matter how complex the policy is, AI can handle that problem. My question to you is: based on your experience, what do you see changing in the risk and compliance area in the next 6 to 12 months? Something going to production?

Carissa Robb (25:55):

So what do we see changing in the next 6 to 12 months for compliance and risk? I'll just start with risk. I think the triangulation of data, the inclusion of more data, is going to strengthen portfolio assessments, and that's going to impact risk. I think that needs to drive product development. And you mentioned it's cyclical, so go end to end. I think you'll see a stronger correlation between historical assessment and insightful data from the AI and ML models, and that will make stronger products at the beginning. Sometimes we introduce products and then we walk them back; we'll see what happens with crypto as banks get into crypto. So I think we're going to have a more sound approach to product development that will improve portfolio performance, with less trial and error, or trial by fire, on the risk side.

Pinar Kip (26:49):

Yeah, I agree with that. And I know we touched on data a lot here, but I think it's such a foundational component. As we think about policy and risk, I think a lot of organizations and regulators are going to further understand the importance of having that data, not only its cleanliness, but its quality and stability. So as we use more AI and there's more openness to that, I expect, maybe not in six months but over a longer time frame, an increased policy focus on the data quality side.

Holly Sraeel (27:23):

We have time for one more question. That's it. Alright. Join me in thanking Carissa and Pinar for their insight today. Thank you.