Bank of America's AI approach: 'Productive paranoia'

While a number of companies are talking about the responsible use of artificial intelligence, most are vague about what it means and how to do it.

In a fireside chat at American Banker's BankAI conference in Chicago this week, Sumeet Chabria, global business services executive at Bank of America, discussed how his bank approaches responsible AI.

Chabria, who leads more than 22,000 employees providing technology services across the company and reports directly to Chief Operations and Technology Officer Cathy Bessant, emphasized the importance of human oversight in developing AI capabilities.

"We never removed the human from the AI loop," he said. "We make sure people are deeply involved in understanding and documenting what the system needs to do. Subject matter experts and business are deeply involved. Business experts are closely involved, from pre-development to development, to testing, to training, really understanding what is the outcome of the AI to the customer."

Following is a transcript of the discussion, which has been edited for length and clarity:

What does responsible AI mean to you?

SUMEET CHABRIA: It means doing it right. We are in the trust business in banking. Consumers trust us with their money and their personal data. They trust us with making sure they can retire when they want to. Companies trust us to help them grow and create new jobs. These are very important things. Trust is earned, and that comes from responsibility.


Responsibility to me means several things. One is being really, really customer-focused and customer-led. That means deeply understanding what the customer wants and what the consequence of a solution is for the customer. So if you're building AI, how will that impact the customer?

The second thing it means is being process-led: making sure the business process dictates the solution you employ. Understand the business need and the process that serves that need, and see whether AI fits as a solution or not, because there are lots of ways to solve a problem.

Making sure we have rigorous procedures, standards, policies and governance frameworks. That's a must.

Making sure you have diversity. I think that's hugely important. Diverse teams produce the best outcomes. They help you mitigate the risks you have and make sure the systems are tested and trained properly with all viewpoints included. That extends to making sure the people who use AI systems are diverse.

And lastly, making sure that human oversight remains at the forefront. We produce the best outcomes when people and AI work together. People need to be ultimately accountable for all the outcomes generated by the AI.

Do you have people whose job it is to oversee the AI engines?

We never removed the human from the AI loop. We make sure people are deeply involved in understanding and documenting what the system needs to do. Subject matter experts from the business are closely involved, from pre-development to development, to testing, to training, really understanding what the outcome of the AI is for the customer.

Oversight means that you really understand how an outcome generated by the AI will impact the customer and what could go wrong. Have a healthy dose of what Jim Collins calls productive paranoia: things could go wrong, so how do you mitigate that risk?

When you talk about trust and risk, are there places where you would just not use AI? Where it's just not appropriate and there's too much that can go wrong?

It's important to have principles that guide you. We have a set of principles that come down to can do versus should do. Even if you can use AI, should you? AI may not be the right solution. It is a very powerful technology, and it is the right solution for lots of things, but there are lots of ways to solve a problem. When I look at an operational problem, sometimes I can solve it with good old re-engineering techniques. So understand why you're using AI, what the outcome of the AI is for the customer, what the benefit is and what could go wrong. Then decide whether you can mitigate that risk. If you can't, I would say don't use AI.

There are a lot of concerns that an AI engine might look at the characteristics of people who perform well in an organization and end up selecting a lot of people who are members of the same golf club, which could produce a very homogeneous-looking group of people. What are your thoughts?

We have decided, as Cathy has said before, not to use AI for that. If you're working with a lot of historical data, there will be historical biases, and there's a risk of repeating those biases. So I think we have to make that decision case by case.

We have a risk framework in the bank where we identify all the risks around a project, and we deeply analyze and debate them. If we're not comfortable taking a risk, we don't do it.

You have to look at it from a consequential point of view. What is the consequence of that outcome to the customer? If Netflix gives me the wrong movie recommendation, there's not a huge consequence to me. But if you make a bad decision on hiring somebody or admitting somebody to a particular school, somebody's dreams could be shattered.

Can you share some of the places where you've said yes to AI?

Erica is a great example. Erica answers your questions and proactively offers insights and guidance to help you manage your finances and stay on top of the activity in your bank accounts. When we talk to our customers, they want answers to many questions: How much did I spend on groceries last month? Do I have any bills coming due?

Erica can proactively guide you and alert you if a transaction has been posted twice to your account. The take-up is phenomenal: eight million users and 60 million answered questions.

How do you look at fairness?

We have to comply with a spectrum of laws, rules and regulations globally. We have fair lending laws. We have the Fair Credit Reporting Act. We have to comply with GDPR.

We have a permitting process at Bank of America. There's a control framework and a model risk management team that has to approve all models.

Another thing people talk about a lot is the idea that an AI model could be infused with the bias of the developers that created it. Is that something you think about?

It's very, very important to have diversity across the board: diversity of thought, gender, race and culture. If you have a diverse team of developers and business experts involved in testing, training and ultimately using the platform when it goes into production, you make sure every viewpoint is incorporated, because people bring different perspectives to the table.

It's very important that people can identify, escalate and debate all of the risks as they go through the process, and then measure and monitor those risks, figuring out how you're going to monitor and control them over time once the system is in production.

Productive paranoia and test, test, test and test.

When you talk about diverse teams, it seems to me pretty much every year we see the ranks of women and minorities coming out of STEM programs growing smaller. How do you manage around that?

I think we have an obligation to society. We all have to come together to change that. There's a lot of discussion going on across the industry about retraining and reskilling the workforce rather than just relying on universities. We're involved in partnering with schools and institutions across the country. We are part of Girls Who Code and many other organizations trying to change that.

We have a massive retraining and reskilling effort in technology and operations.

Cathy Bessant spearheaded the creation of an online technology and operations university, where about 95,000 people can take online courses. It's self-directed, so you can take the courses at your own pace.

It is a bit of a myth that AI requires only deep technical experts who know Python or Java. There are lots of things you need to do and lots of skills you need.

What does the future of work look like when more companies are adopting AI? One analyst has estimated that 96% of “cubicle jobs” will disappear. How do you see jobs changing?

That's one of the concerns around this whole topic of responsibility: the misinformation and exaggeration about doomsday scenarios. We don't believe in that. We believe that jobs are not going to go away as much as people say, but the work is going to change and the skills required to do that work will change. In banking, we still have to process payments; 6 billion checks a year get processed by Bank of America.

AI should increase GDP. It should increase the size of the economy and create global growth.
