BankThink

Bank boards need to reckon — right now — with the risks of AI systems

Too many banks' boards are unprepared to reckon with the manifold risks that AI-enabled systems and services present, write Sandra Galletti and Steve Goldman, of MIT's Crisis Management and Business Resiliency program.

As banks accelerate adoption of artificial intelligence, the promise of greater efficiency and precision is colliding with a sobering reality: AI is not just an innovation opportunity; it is an emerging risk driver that requires executive and board-level attention. Accordingly, boards should treat AI as a risk item and require controls before scale-up.

While the operational rewards are real, the risks are consequential and often underestimated. AI's complexity, limited transparency and reliance on sensitive data present challenges that traditional risk frameworks were not designed to manage.

Several large U.S. banks have already deployed generative AI across key business functions. JPMorgan, Goldman Sachs, Citi and Morgan Stanley, for example, use the technology to support client service, productivity and internal knowledge access.

These deployments mark a turning point: As AI systems begin shaping decisions and client interactions at scale, banks must approach AI governance as a core enterprise priority. In practice, before deploying or scaling AI systems, directors should insist on risk assessments and mitigation plans.

First, banks must rethink operational risk. AI introduces risk rooted in autonomy, limited transparency and reliance on third-party data and models. When embedded into core processes such as credit decisioning and fraud detection, a malfunctioning AI model can trigger anomalies or broader disruptions. Third-party providers add further complexity through outsourced models and external training data, which require thorough vetting and ongoing monitoring.

To mitigate these risks, banks should validate models in controlled environments, establish input-output traceability and embed human oversight at critical control points in risk-sensitive functions. Fallback mechanisms in case of system malfunction and continuous performance monitoring are nonnegotiable. Operational safeguards should be developed jointly with IT, risk management, modeling, operations and vendor management teams. The board should see evidence that these safeguards hold up under stress, not just on paper.

Alongside operational considerations, compliance risk rises significantly with algorithmic decision-making. AI systems are rapidly entering domains governed by strict regulatory expectations, yet they often lack the transparency, traceability and auditability that regulators require. U.S. banks must align AI use with supervisory guidance from the Federal Reserve, the Office of the Comptroller of the Currency, the Federal Deposit Insurance Corporation and the Consumer Financial Protection Bureau. European banks face the General Data Protection Regulation, the EU AI Act, the Digital Operational Resilience Act and the European Banking Authority's outsourcing guidelines.

Improper data handling may lead to violations of data-protection laws as well as transparency failures. These risks are exacerbated in high-impact areas like credit underwriting, fraud monitoring and regulatory reporting. To reduce compliance exposure, banks should complete a privacy impact assessment, or PIA, before deployment; engage legal and data-protection experts early in the process; and define internal policies governing AI use. Explanations of algorithmic logic should be accessible to both customers and internal reviewers. Effective controls must also be implemented to manage data sources and prevent unauthorized processing. Boards should expect management to substantiate compliance with clear documentation.

Beyond compliance, AI expands the legal risk landscape. Even when AI systems are compliant, they can still expose banks to litigation: A single flawed algorithmic decision may violate civil, commercial or labor laws. Banks remain legally accountable for the outcomes of automated systems. Errors in logic, biased training data or insufficient oversight do not transfer liability; they heighten it.

To reduce exposure, banks must ensure legal review throughout development and deployment. This includes review of liability and insurance clauses in contracts with AI vendors, with attention to intellectual property rights, indemnity provisions and error-handling responsibilities. Automated decision rules should be assessed for potential legal exposure, particularly where discriminatory or harmful outcomes could occur. Maintaining active legal oversight of AI use within the institution is critical, and boards should demand periodic legal risk briefings tied to AI deployments.

Moreover, closing the governance gap in AI adoption is essential. AI systems introduced without clear governance pose institutional risks: Many AI tools originate in innovation labs or vendor-led deployments and often operate outside formal control structures, creating fragmented oversight and unclear accountability.

To ensure alignment with regulatory expectations and internal policy, banks must establish centralized AI governance frameworks. Robust governance must also extend to third-party vendors and service providers, consistent with U.S. interagency guidance on third-party risk management. Explainability standards must be a core governance component, enabling effective internal and external reviews. Banks need structured AI governance, with clear role definitions, approval protocols, cross-disciplinary committees and institutionalized policies. Internal audit should be engaged from the outset, and the board should confirm that its own responsibilities for AI oversight are clearly defined.

In addition, banks must recognize that AI ethics isn't optional. AI systems trained on biased data tend to replicate, and even intensify, those biases in their outputs. From credit scoring to behavioral analytics and complaint handling, these systems risk amplifying social and economic disparities. Banks must implement formal mechanisms to detect, assess and mitigate harmful bias, including independent review of training datasets, fairness criteria embedded in model development and regular ethical impact assessments. For that reason, boards should insist on clear fairness metrics, red lines and escalation paths when harm is detected.

Finally, reputational risk is on the line. When AI decisions appear opaque or unfair, or are publicly contested, trust can erode rapidly and reputational fallout can follow. Banks must proactively embed explainability, human oversight and clear communication strategies into high-impact AI applications. Communication protocols should transparently disclose AI use to stakeholders, provide accessible explanations and ensure frontline staff are well trained to manage client interactions. Directors should test how the bank will explain an AI-driven loan denial or account freeze before it appears on the front page.

In summary, artificial intelligence offers banks meaningful opportunities to enhance efficiency, competitiveness and service quality. However, institutions that deploy AI without rigorous controls may invite operational failures, regulatory challenges, legal exposure or reputational harm. There are no off-the-shelf solutions or shortcuts. Each deployment requires thorough risk evaluation, transparent design and collaboration across all relevant functions. Ultimately, the board's job is to make sure that happens — before the next model goes live.
