Banks spent 2025 opening their wallets for AI. When asked this spring to rate how fluent their workforces have actually become, most landed on "moderate." The word is doing a lot of work; the majority of banks don't actually have a way to measure this.
In American Banker's AI Talent Shift survey of 206 banking professionals, conducted in March, 48% rated their organization's current AI literacy as moderate. Just 16% called it "high" or "very high." Meanwhile, a full third (33%) said their organization's AI fluency was low or very low.
The picture was marginally rosier when respondents graded their own department: 24% rated their AI literacy high or very high, only 23% called it low, and the "very low" share collapsed from 9% at the enterprise level to 3%.
Put another way: most respondents rated their own team as better than average.
The apparent grade inflation suggests bank leaders need to ask themselves a basic question: How are they measuring AI literacy?
When American Banker asked survey respondents, many said that essentially no one is keeping score.
"We don't measure this. It's an educated guess," wrote a manager at a sub-$5 billion credit union.
"Still a work in progress that doesn't have a centralized approach," said a senior executive at a bank with more than $250 billion in assets.
Others described the state of play as "ad hoc," "purely qualitatively," "subjective observation," and "nothing definitive yet."
Size correlates with confidence. National banks with more than $100 billion in assets rated their organization "high" or "very high" in AI fluency 26% of the time, compared with 4% at community banks under $10 billion.
Bigger balance sheets buy bigger training budgets, centralized AI governance and specialists who can focus on identifying what AI fluency means and who at the company actually has it.
For everyone else, AI fluency remains a gut-check. Leaders know AI has landed in the workforce, but they can't quite tell you how well it's been absorbed.
That matters because fluency in tools (rather than mere access to them) is what separates AI that generates productivity from AI that generates shelfware (software that gets licensed, deployed, then ignored). Right now, a lot of the industry can't benchmark either.