A deer caught in the headlights poses a roadway danger, but that real-life situation — all too familiar to a Midwesterner like Bank of America executive Cathy Bessant — may hold some helpful lessons for shaping the development of AI.

Bessant made the unusual point ahead of an announcement Tuesday that BofA and the Harvard Kennedy School’s Belfer Center for Science and International Affairs had formed the Council on the Responsible Use of Artificial Intelligence.

What's the connection between Bambi and bots?

About a year and a half ago, Bessant, the bank's chief operations and technology officer, began thinking about all the things she was taught while learning to drive that are not written down in any textbook but shared by word of mouth.

“I grew up in Michigan and we were all taught to, no lie, ‘Hit the deer,’ ” she said. “Deer are attracted to headlights, they run out into the road a lot, and if you swerve into oncoming traffic or a ditch, that can often be a worse decision than the decision to hit the deer. But it’s counterintuitive to tell someone to hit something.”

That made Bessant ask herself: "What happens if the algorithm that causes my car to make that decision is programmed by someone who ... never had that same experience?”

Put more broadly, artificial intelligence software is designed by humans and therefore subject to all the “potential biases or the decision-making brilliance and inadequacy of humans,” she said.

These questions prompted Bessant to consider the role that Bank of America and other financial institutions should play in ensuring that AI is used for good without creating too many unintended consequences.

Free rein?

“The pace of change and the deployment of AI is happening at such a rate that it outstrips all our infrastructure: social, human, legal frameworks and workforce skilling,” Bessant said. “I worry that deployment will outpace or get ahead of our ability to understand and act on the implications of what we’re doing.”

She wanted to bring in an academic perspective and to create a neutral place where experts from different sectors, including rival companies, could discuss AI and craft sound policies.

She approached Harvard University with her idea.

“One of the reasons we wanted to work with Harvard and with MIT is so that we weren’t alone in shaping the agenda of the council,” she said. “It was important to Bank of America to get the council going and to create foundational funding that would give it an opportunity to get up and running.”

BofA has made a three-year commitment to fund the group but declined to say how much it is providing.

The bank will not control the agenda, Bessant said. Harvard will put the program together, and MIT will be involved.

According to Harvard Professor Dan Schrag, a strength of this program is that it is not just for computer scientists and tech companies, but for financial, service, retail and manufacturing companies as well as public servants, government leaders and academics.

How AI could go wrong


Schrag insisted this council is about more than concerns over the dangers of AI.

“It grows out of a recognition that there will be ethical challenges — there already are — and it’s important to think about them before the technology spreads and people feel betrayed by it,” he said. “We have to understand what the issues are, articulate them, think about them, perhaps take some actions where appropriate.”

That said, he did share one personal concern he has about AI.

“One of the most scary things is AI disabling the last vestige of what we think of as reliable, trusted information,” Schrag said.

He believes that within the next 10 years, artificial intelligence software will be able to create a fake video of anyone doing anything, and that it will be forensically impossible to distinguish such a fake from an actual video.

“Think about that, how scary that is in terms of Black Lives Matter or #MeToo or anything like this,” Schrag said. “Videos are the last vestige of reliable information because when there’s a YouTube video that goes viral, people believe it. If people say videos are no longer truthful, they’re worthless. That’s an issue, and one of many that will be discussed by the council.”

Bessant pointed out that financial institutions have a big impact on consumers’ lives and therefore a great duty to be responsible in their use of AI when it comes to extending credit, recommending investments and protecting customer funds, data and digital systems.

“Because we affect money, whether it’s the movement of money or the investment and return of money or it’s how capital markets work for companies and jobs, I believe we have a monumental responsibility to get it right,” she said.


All of these services “have to use models and algorithms and will be at their best when we can use predictive technologies, but we have to make sure that as we capture that growth and do what’s right and great for our customers and clients, that we’re also recognizing the potential pitfalls.”

Like any other software, an artificial intelligence program is subject to “garbage in, garbage out”: it can draw false conclusions from flawed or incomplete data.

There is also the issue of what Bessant calls “data lineage”: being able to verify the accuracy of data and to trace how it may have been changed between the time it is created and the time it is used.
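As a minimal sketch of the idea, and not any system the bank described, a lineage record might attach provenance metadata to a dataset so a downstream consumer can check where the data came from, which transformations were documented, and whether it was altered along the way; all names here (`LineageRecord`, `fingerprint`) are hypothetical:

```python
import hashlib
import json
from dataclasses import dataclass, field

def fingerprint(records) -> str:
    """Stable hash of a dataset, used to detect undocumented changes."""
    return hashlib.sha256(json.dumps(records, sort_keys=True).encode()).hexdigest()

@dataclass
class LineageRecord:
    """Provenance metadata carried alongside a dataset (illustrative only)."""
    source: str                                   # where the data was created
    history: list = field(default_factory=list)   # documented transformations
    checksum: str = ""                            # fingerprint of current state

    def record(self, step: str, records) -> None:
        """Log a transformation and re-fingerprint the data."""
        self.history.append(step)
        self.checksum = fingerprint(records)

    def verify(self, records) -> bool:
        """True only if the data matches its last documented state."""
        return fingerprint(records) == self.checksum

# Data is created, ingested with a logged step, then silently edited.
data = [{"customer": 1, "score": 720}]
lineage = LineageRecord(source="credit-bureau-feed")
lineage.record("ingested", data)

data[0]["score"] = 680           # undocumented change between creation and use
tampered = not lineage.verify(data)  # lineage check flags the mismatch
```

The point of the sketch is only that provenance must be recorded at creation time; once data has been modified without a corresponding lineage entry, the mismatch is detectable but the original value may already be lost.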

Neither Bessant nor Schrag wanted to predict what the outcomes of the council will be, beyond research, the development of best practices, and dialogue.

“The participants are going to drive that agenda,” Bessant said. “I know that sounds ethereal, but I think it only works if we don’t predetermine the outcome.”

They did suggest four possible areas of focus for the new organization: consumer privacy, equal treatment of all market participants, transparency of AI technology and workforce impact.

Workforce changes are a top concern for Bessant given that AI will destroy some jobs and create others. She said she hopes companies make deliberate choices now to make these changes positive for workers.

“Those are not short-term, easy-to-solve issues,” she said.