BankThink

To reap the full benefits of AI, bankers need to put in the work now

Humans must remain in charge of gen AI systems as they are developed and implemented. Only humans can verify the accuracy of these systems and address issues of bias, which can inadvertently be introduced in the data or system model, cautions Synchrony's Carol Juel.

Generative AI has gotten everyone's attention. Companies are exploring ways to harness it for business advantage. And employees are eager to explore how the technology might help them do their jobs better.

So, should financial services leaders be all-in on this technology? Not so fast. While gen AI has immense potential, it simply cannot — yet — be trusted for real-world business operations at scale, such as credit decisioning. Companies must first determine how to use gen AI in an ethical and responsible manner while proactively mitigating risk.

Start by treating gen AI as you would any research project. Experiment with it. Test its limits internally before scaling to customer-facing products. Find out where it could be useful and create business value. Above all, keep it in-house until all issues have been resolved.

And there are issues. We've all heard the troubling examples of "AI hallucination" — with gen AI systems making up information, statistics, dates, events, you name it. Because many gen AI systems are built to predict the next word based on all the information they have been trained on, the information they return is sometimes incorrect. Other times, gen AI combines accurate information in odd ways that add up to an incorrect result. It might report that Julius Caesar and George Washington were friends.

That's why content generated by AI must include explanations of how the system arrived at its decisions and actions. This is necessary for establishing trust and accountability — especially in financial services. For example, when a consumer is declined credit, we are required to explain why they were declined. Before gen AI can be introduced into any real-world process, we must prove that the AI-augmented workforce is making those lending decisions in a fair and explainable manner.

The word "augmented" is crucial. Humans must remain in charge of gen AI systems as they are developed and implemented. Only humans can verify the accuracy of these systems and address issues of bias, which can inadvertently be introduced in the data or system model. The best way to combat AI bias is by promoting diversity within the company. Employees with a variety of backgrounds and experiences will be far more sensitive to bias than a nondiverse workforce.


But they need help. As AI transforms every role, businesses must be prepared to train their teams with the tech skills needed to succeed. We need to cultivate skill sets like data science, analytics, machine learning and software development, but also fast-emerging skills like prompt engineering that can help shape the outputs we get from a gen AI system. Cultivating a culture of lifelong learning will enable companies to sustain an AI-powered workforce. And training must be accompanied by sensible corporate governance policies that resolve questions such as: How will data privacy be ensured? How will gen AI be held accountable for decisions it makes? Does copyright infringement apply when gen AI uses your data?

Financial services companies don't have to answer these questions in a vacuum. To broaden their perspectives, they might join groups such as the Business Roundtable's technology coordinating committee or the Bank Policy Institute's technology policy division, BITS, and its AI working group.

Or they could establish an internal, cross-functional team tasked with examining the technology from a variety of angles and developing use cases. The team ideally should include leaders from legal, marketing, finance, HR and technology as well as from business units with P&L responsibilities.

These leaders will be well positioned to set up policies around data security, protecting intellectual property and determining what's accurate and what's not. Then they can move on to more complex questions, such as: Do we understand the outputs of AI well enough to make the right decisions about the technology's use for our stakeholders?

Gen AI has generated a great deal of attention — most of it justified. Now we need to get it ready for business. In the future, this technology can serve as a helpful and intelligent partner. But to drive sustained outcomes and trust, we need to put in the hard work and adopt a responsible approach to harnessing its full power.
