Podcast

Citizens' plans for using generative AI ethically: Beth Johnson


Transcription:
Penny Crosman (00:03):

Welcome to the American Banker Podcast. I'm Penny Crosman. What does responsible or ethical AI mean to a large bank? And what are people doing to make sure they keep their AI and advanced AI usage in line with ethical and responsible principles? We're here today with Beth Johnson, who is vice chair, chief experience officer and head of ESG at Citizens Financial Group. Welcome, Beth.

Beth Johnson (00:32):
Hi, Penny. Thanks for having me.

Penny Crosman (00:34):
Thanks for coming. So to start off, I'd like to ask you about your job title, because "chief experience officer" is a relatively new title in the banking industry, I think, and I see in your bio that you oversee digital design, data and analytics, marketing communications, enterprise payment strategy and infrastructure, and a company-wide agile transformation effort, as well as ESG. Now, that seems like a lot of different things. When I think of customer experience, I think of what does the mobile banking app look like? What does online banking look like? And I feel like your job goes way beyond that into a lot of other areas, including back-office infrastructure and transformation. What does chief experience officer mean to you?

Beth Johnson (01:25):
When Citizens thought about how we build capabilities across our bank, and you touched on a lot of them in my responsibilities here at Citizens, it was about building great capabilities to enable great customer experiences. So when we think about how we want to win, it is being a trusted advisor to our clients, whether they're large corporations, small businesses or consumers, and leveraging the tools that we need to do that. And so as I thought about these different areas of responsibility and what made sense from a title perspective, it all came back to that foundation: great experiences, delivered through these capabilities, are what's going to enable us to win. And that's really how the title of chief experience officer was decided upon for this group about three years ago when we created it.

Penny Crosman (02:13):
Okay, interesting. And I feel like we're starting to see a few more chief experience officers here and there, and it's interesting to put all these things under the same umbrella with one person overseeing all of them. You also oversee ESG, which I think is really interesting. Given all of the political backlash and rhetoric around ESG, what do you think about all of that and have you been affected by it?

Beth Johnson (02:43):
Yeah, we added ESG to my responsibilities about a year ago now, really to ensure we were elevating and thinking holistically as an organization about how we were driving our ESG efforts. And so what that's meant is just getting back to the basics. What are the things that matter most to us? And we outline it in four ways. The first is just to have very strong, robust corporate governance, and that's critical, I think, to running a bank. And we've seen it throughout the last year: running a safe and sound bank requires great governance. The second one is around climate: how are we thinking about supporting the transition, and in particular supporting our clients in their transition, as we move to a lower-carbon future? The third is around developing the workforce of the future, both internally and externally.

(03:36)

How do we partner with our clients in Philadelphia on what kinds of skills they need in their business and then reach into the community to help train those skills? So workforce development is a major focus of ours. And then finally, fostering strong communities. I think a bank has a critical role to play in our communities, ensuring that we're providing the capital and the tools to invest in and build the communities around us. So when we step back and look at our four key priorities, what we found is our stakeholders agree those are the things we should be focused on, whether those are our customers, our regulators or our investors. And so we've been able to focus on what we believe matters rather than getting caught up in some of the words in the latest controversy. And that's worked quite well for us so far over the last year.

Penny Crosman (04:29):
That makes sense. And are there any particular projects or efforts around reducing the bank's carbon footprint that you could mention?

Beth Johnson (04:39):
Yes, there are a couple, and we're just about to put out this year's ESG report and our TCFD report, but we are doing a couple of things that I think are pretty innovative. So like many, we're working to reduce our own Scope 1 and Scope 2 emissions and putting in place practices that will enable us to be more thoughtful stewards of the environment. But we're also doing a lot of work on our business side, and we have a real focus there to support our clients. So for example, we've launched products like our green deposit product, which enables our corporate clients to place deposits with us that we then use to invest in lower-carbon opportunities. We've also done some investment in wind farms, so we will have offset our own emissions with some of those recent wind farm investments. And then third, we have an effort to train all of our corporate bankers and our small business bankers to be able to have great conversations with their clients on how they should be thinking about supply chain issues or their own emissions, and how they think about supporting their business. And so far, our conversations with clients have gone really well as we bring in some expertise to support them in their journey.

Penny Crosman (06:01):
So I'd like to shift gears and talk about AI because that just seems to be the topic everybody wants to talk about all the time these days. And it's a really broad question to ask you, but where would you say Citizens is using AI a lot today? I mean, is it everywhere? Are there particular areas where AI is really helping the bank to be effective and efficient and so forth?

Beth Johnson (06:28):
I've always been a fan. My background started in math and modeling, so I've always been a fan of thinking about analytics, and now more recently AI, as a real driver of business value. So at Citizens, what we've thought about over the last several years is how we do a couple of things to support that and to be able to leverage these tools. The first is investing in data foundations: do we have the good data that's going to allow us to leverage analytics, AI and, in the future, gen AI to make sure that we're delivering business value? The second is getting some of the foundational tools in place: do we have the technology that we need? And the third, and I know we want to talk about this a little more later, is a real focus on the risk and governance processes that are going to enable us to deliver these use cases throughout the bank.

(07:20)

And then, to get directly to your question, what we've done is focus on three areas of use cases. One is around customer tools: how do we start to provide insights and personalized messaging to our customers? A good example of that is we've done some work in college discovery, so how do we use some analytical and AI tools to help our customers understand where they should go to school and how the financing of that plays into that decision? I think the other classic one that's in use today, here and in other places, is chatbots. So how do we think about leveraging these tools directly to help support our customers and their servicing needs? We're about to launch that not only in our consumer bank, but in our commercial bank as well, with something we call Digital Butler to really support the commercial clients that we have.

(08:11)

The second set of use cases is in colleague tools for our employee base. We have Jamie, which is an award-winning chatbot for prospective Citizens employees to help them in the recruiting process. We also do things like put in place tools for our colleagues to better search for career opportunities within the bank and to think about how to train for and manage their careers. And then the third, where historically we've seen a lot of great use cases for AI in particular, is around risk management: collections, fraud, cybersecurity and authentication. These are tools that enable us to be more thoughtful about how we protect the bank and how we protect our customers in today's environment.

Penny Crosman (09:00):
So those are good examples. What about the really advanced AI, like generative AI, like ChatGPT, and large language models? Is Citizens thinking about using those? Are you doing any testing or investigating?

Beth Johnson (09:18):
Yeah, I'm a big believer that this new technology is going to change the way we work going forward. And so at Citizens, we are being really thoughtful about how we ensure that we're testing, learning and starting to get some use cases in place leveraging this technology. Now, the first thing we did, like many banks, was actually turn off ChatGPT within our network, so we did not have people across the bank using it. But we did put together a task force comprising data scientists as well as technology partners, cybersecurity, model governance, and our HR and talent development teams to say, let's thoughtfully go out and start to identify use cases that might make sense across Citizens. We actually identified more than 90, if you believe it or not. So how do we start to think about this tool? What we did was say, let's get real about how we launch a couple of test cases. So we have some sandbox environments where we're testing the tools, and we're aligning across four key categories of potential use cases. Our goal is to have several in test and pilot by the end of this year.

Penny Crosman (10:39):
Can you think of any that would make sense to you? Is it for things like research and ...

Beth Johnson (10:46):
Yeah, we have four categories. I think a lot of people are starting in the same areas, but we have four key categories that those 90 use cases boil down to, and you can think of them as stepping stones to new opportunities. One is around knowledge management: how do you leverage gen AI and its ability to quickly answer questions, asked the way we would ask them, against large data sources? So imagine the contact center and our contact center agents, and how they search our knowledge management tools so they can answer questions from our customers faster, better, easier. It makes our customers happy, and it makes our colleagues happy because they're able to serve our customers better. The second main category we're looking at is summarization. Say I'm a relationship manager who has been working with a client: how do I summarize the information I know about them and make sure that I'm then able to focus on the most relevant things and conversations to have with them?

(11:52)

So I think about those two. The third is content creation. At what point do we start to leverage gen AI to do things like create marketing content? That's not among our first use cases, but we're definitely thinking about content creation as well. And then the furthest out is reasoning. When do I get to the point where I'm able to have these models and this technology do reasoning and potentially work directly with clients? We feel like that may come in the future, but we'll step our way through these different levels and different use cases to be thoughtful as we investigate and start launching and using this tool.

Penny Crosman (12:31):
Some people have these big concerns, ethical concerns about AI in the world. Some people worry that it's going to take over jobs, some people worry it's going to start terminating people. There are all kinds of wild concerns that people have. Do you have any concerns about the growing use of AI in our society?

Beth Johnson (12:54):
I do. I think about it in two ways. My optimism probably outweighs my pessimism, because I do think the ability for everyone to use these tools to be more impactful and effective at their jobs, or at learning and their education, is really powerful. I think the real difference with gen AI is that this is a tool that, as we've seen with ChatGPT, everybody can use, versus older AI models, which were really the purview of data scientists and some very technical people. I recently was talking to someone who's an educator in Australia, believe it or not, and they were thinking about how to use gen AI as a tutor for all of the students throughout their educational system. And they had a belief that if you could do this right and get one-on-one tutoring, you could take a C student and move them to a B, or take a B student and move them to an A, just by being able to be really personalized in that interaction model and support them where they need it.

(13:58)

So I think from that dimension, there's a lot of positive to this technology and what it can do in the world. I do think the risk, of course, is that there are people who will use this technology as bad actors, and as a practitioner, as someone within a bank, I think we have to put guardrails in place in order to prevent that from happening in our ecosystem. I think there's a lot of education that needs to happen out there so people are aware of these tools and what they can mean. But on the whole, we want to make sure that we leverage the positive of this, given that the technology is out there, and that we put the guardrails in place to guard against the negative side.

Penny Crosman (14:42):
That makes sense. And what about within Citizens Bank? What are your biggest concerns about deploying AI?

Beth Johnson (14:51):
So we have been really thoughtful about how we partner with our risk organization and with our cybersecurity organization to ensure that we're protecting our data. I think as a bank, we have a real responsibility, given the amount of data that we have in the organization and on our clients, to protect that data, to use it thoughtfully and to ensure that we meet all privacy standards. So we are taking that approach. That is part of the reason why we'll start with a human on top of AI in these and other use cases. We take that very seriously, and we have a lot of guardrails, processes and practices in place to make sure that we achieve it.

Penny Crosman (15:36):
Can you talk a little bit about the kinds of guardrails you have in place, or is there a people component? Are there people in charge of responsible use of AI?

Beth Johnson (15:47):
We do. In our second line of defense, in our risk organization, we have a leader in charge of model governance who also leads our practice on responsible AI, to act as a check and balance against the teams that are developing these technologies. We leverage a lot of our existing risk processes. In addition, we have what's called Burke, which is basically a new-initiatives committee that reviews all of our new initiatives across the bank for any kind of risk, whether operational, reputational, strategic or cyber, to make sure we're being thoughtful as we launch new initiatives. Our AI initiatives will go through that as well. So we've been really thoughtful about ensuring that we're leveraging what we have in our risk systems, but also adding to it to be thoughtful about these new gen AI opportunities.

Penny Crosman (16:46):
Now obviously some people worry about the danger of bias creeping into AI. For instance, if something like location becomes a proxy for race, or certain behaviors act as a proxy for gender, these models could start making decisions that violate fair lending and that sort of thing. Is that something that you worry about?

Beth Johnson (17:13):
As I learn more and more about these models, there is a tendency for them to give biased answers just based on the math and the way they build their predictions. So I think we absolutely have to guard against that happening as we leverage this technology. Learning the techniques and the tools, and then monitoring and testing to ensure that we don't have biases built into our systems, will be critical. That is a reason we would not go straight to things like underwriting practices with these tools, because we don't want those biases to creep in. But it is something we're attuned to and monitoring, and I think it's important that everybody understands that in the communication on how to leverage these models.

Penny Crosman (18:01):
Do you think the industry as a whole is doing enough to make sure banks are responsible and ethical in their use of AI, and to get ahead of regulators that might want to police it?

Beth Johnson (18:14):
I think banks take their data and their data privacy very seriously, have been forced to over time, and really understand why this is important to our customers. So I do think the industry is taking it seriously. That said, I think we also need to make sure that our regulators are getting educated on the technology and the topic, and that the regulations follow in a way that everybody understands the playing field and we're all playing the same way, so that we can get the benefits but guard against some of the risks of the technology.

Penny Crosman (18:47):
Sure. That makes sense. Well, Beth Johnson, thank you so much for joining us today and sharing your views on ESG and responsible AI, and to all of you, thank you for listening to the American Banker Podcast. I produced this episode with audio production by Kellie Malone Yee. Special thanks this week to Beth Johnson at Citizens Bank. Rate us, review us and subscribe to our content at www.americanbanker.com/subscribe. From American Banker, I'm Penny Crosman and thanks for listening.