BankThink

Community and regional banks are sleepwalking into AI risk

Somnambulant, 1878 by Maximilián Pirner, via Wikimedia Commons
  • Key insight: By welcoming AI-driven vendor platforms into their banks, small and midsize institutions are introducing risks that are poorly understood and potentially interrelated. Boards of directors need to start paying attention.
  • What's at stake: Institutions that treat governance as a check-the-box exercise may see short-term efficiency gains while quietly increasing fragility. 
  • Forward look: Institutions that integrate AI oversight into board-level risk and strategy discussions can adjust when models underperform or assumptions break.

The AI risk forming inside community and regional banks is no longer speculative. It is already showing up on their balance sheets.


Across the country, institutions with under $10 billion in assets are integrating AI-driven underwriting tools, fraud detection engines, marketing optimization systems and vendor-embedded analytics. Most executives see modernization. Many boards still see technology.

It is neither.

AI is reshaping capital allocation, model risk exposure, third-party concentration and operational dependency. Yet oversight frameworks often remain anchored in a pre-AI view of risk.

Economically, this is unpriced exposure. From a governance standpoint, it is risk without clearly assigned fiduciary ownership.

Regulators eventually focus on the gap between where risk forms and where oversight resides. In AI adoption, that gap is widening.

AI differs from prior technology waves not because it is new, but because it embeds itself in decision authority.

In most community and regional banks, AI is not developed internally. It is embedded in vendor platforms: loan origination systems with predictive underwriting layers, fraud engines that auto-score transactions, marketing systems that determine customer targeting, pricing algorithms that optimize deposit or credit offers. These systems increasingly influence outcomes that affect capital, compliance and customer fairness.

Because the intelligence sits inside third-party software, it is often treated as vendor functionality rather than institutional risk.

That distinction is artificial.

When a vendor's model influences credit exposure, pricing sensitivity, fraud losses, or customer segmentation, the economic consequences accrue to the bank's balance sheet, not the vendor's. Operational responsibility may be outsourced. Fiduciary responsibility is not.

Traditional model risk frameworks were built around internally developed or clearly documented models. Vendor AI systems, especially adaptive or opaque architectures, do not fit neatly into legacy validation templates. Boards may hear that "the vendor has validated the model." Validation of performance is not the same as governance of exposure.

Concentration risk is also forming quietly. Many midsize and community banks rely on overlapping fintech providers for core processing, underwriting analytics or fraud detection. If those vendors deploy similar AI architectures or depend on similar data pipelines, correlated model behavior becomes a systemic vulnerability. This is a network structure issue, not a hypothetical concern.

From a regulatory standpoint, the issue will not be efficiency. It will be whether boards understood the risk implications of adopting AI at scale.

From a governance standpoint, the question is simpler: If AI influences credit, capital, pricing and customer outcomes, how can it remain a secondary agenda item?

At its core, this is a fiduciary issue.

Boards are charged with overseeing risks material to the institution's safety and long-term viability. When AI systems influence credit decisions, pricing structures, customer segmentation, fraud losses or capital deployment, they are no longer tools. They are decision authorities.

Decision authority carries fiduciary weight.

AI blurs lines of accountability. Management may see AI-enabled platforms as tools. Vendors may describe them as configurable services. But fiduciary responsibility does not diffuse simply because decision logic is embedded in software.

If an AI-driven underwriting model misprices risk, if pricing algorithms erode margin under volatility, or if automated targeting introduces bias or regulatory exposure, boards will not be asked who built the model. They will be asked whether oversight structures were adequate.

That is the fiduciary test.

AI amplifies outcomes. It accelerates both good and bad decisions. Weak assumptions compound faster. Errors propagate more broadly. Risk materializes earlier. Governance frameworks that lag this acceleration create a structural mismatch: Exposure forms faster than oversight adapts.

This makes AI governance not only a compliance issue, but a competitive one.

Institutions that treat governance as a check-the-box exercise may see short-term efficiency gains while quietly increasing fragility. 

Model-driven growth can look compelling on dashboards while eroding margin durability or capital resilience. When liquidity tightens, credit cycles turn or scrutiny intensifies, those weaknesses surface abruptly.

Institutions that integrate AI oversight into board-level risk and strategy discussions gain a different advantage. They understand how growth is generated, how margin is sustained and where dependencies accumulate. They can adjust when models underperform or assumptions break.

AI governance is becoming a survivability differentiator. Governance determines whether AI-driven growth is durable or illusory.

The question is no longer whether to adopt AI. That debate has largely passed. The real question is whether governance structures have evolved as quickly as the technologies influencing institutional outcomes.

AI oversight cannot remain siloed within technology committees or delegated entirely to vendor management. If AI influences credit quality, liquidity strategy, acquisition economics, or pricing resilience, oversight belongs alongside capital planning and stress testing.

Regulators are not anti-innovation. They are risk-sensitive. When supervisory attention turns more directly toward AI, the institutions best positioned to respond will be those that treated governance as strategic rather than procedural.

AI will not destabilize community and regional banks on its own. Poor oversight will.

AI is already inside the institution. The question is whether governance has kept pace.

