
- What's at Stake: Unchecked supervisory AI threatens market integrity, financial inclusion and public trust.
- Expert Quote: "Supervisory decisions must remain explainable and accountable," requiring a "human in the loop" for all significant interventions. —Marlene Amstad, FINMA
- Supporting Data: Sixty-seven percent of agencies use AI; 37% report no formal governance or ethical framework.
Source: Bullets generated by AI with editorial review
More than two-thirds of the world's financial authorities are betting on artificial intelligence to manage our economies. So how is it possible that more than half of them are doing so without basic accountability frameworks in place?
From detecting complex money-laundering patterns to predicting systemic banking shocks, financial authorities — including central banks, securities commissions and market conduct regulators — are increasingly betting on AI. These institutions play a critical role in safeguarding financial stability and protecting citizens' savings. When effective, they can promote inclusion and sustainability by monitoring gender gaps in access to finance or supervising climate-related financial risks. However, when governance lags behind technology, the consequences can include eroded trust, weakened market integrity and unintended harm.
Over the past few years, we have watched financial authorities move from paper-based spreadsheets to high-frequency data lakes. To date, 67% of supervisory agencies are deploying, piloting or exploring AI for a diverse range of high-impact use cases, and examples are everywhere, including at the European Central Bank.
The latest data from the State of SupTech Report 2025, however, shows that governance is not keeping pace with adoption.
The statistics are revealing. Over a third of agencies (37%) report having no formal governance or ethical framework for AI in supervision, while only 3% have developed a dedicated internal framework specifically tailored to suptech applications. Furthermore, only 4% align explicitly with international standards like the OECD AI Principles or the EU AI Act, 6% conduct regular ethical audits, and a mere 5% publish transparency reports regarding how AI impacts their supervisory decisions.
Perhaps most striking is the limited recognition of ethical risk as a supervisory challenge. When asked about barriers to deployment, only 8.8% of authorities identify ethical concerns or unintended societal impacts as an issue. Even fewer (8.1%) cite the risk of algorithmic bias or discrimination.
These figures are alarmingly low given the risk of AI amplifying existing inequalities. Risks may be underreported precisely because governance is underdeveloped. Where there are no bias audits, risks remain invisible and easy to dismiss.
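To make the point concrete, the sketch below shows, in Python, roughly what a minimal bias audit could look like: comparing how often a model flags entities for review across groups. Everything in it (the group labels, the sample records, the parity metric) is hypothetical and simplified; a real audit would use actual supervisory data and richer fairness measures.

```python
# Illustrative only: a minimal check of whether a hypothetical model that
# flags cases for supervisory review treats groups very differently.
from collections import defaultdict

def flag_rates_by_group(records):
    """records: dicts with hypothetical 'group' and 'flagged' keys."""
    counts = defaultdict(lambda: {"flagged": 0, "total": 0})
    for r in records:
        counts[r["group"]]["total"] += 1
        counts[r["group"]]["flagged"] += int(r["flagged"])
    return {g: c["flagged"] / c["total"] for g, c in counts.items()}

def disparity_ratio(rates):
    """Lowest flag rate divided by highest; 1.0 means parity, lower means skew."""
    return min(rates.values()) / max(rates.values()) if max(rates.values()) else 1.0

# Hypothetical model outputs joined with a protected or proxy attribute.
sample = [
    {"group": "urban", "flagged": False}, {"group": "urban", "flagged": True},
    {"group": "rural", "flagged": True},  {"group": "rural", "flagged": True},
]
rates = flag_rates_by_group(sample)
print(rates, "disparity ratio:", round(disparity_ratio(rates), 2))
```

Even a check this crude makes a skew visible and recordable; without it, the question of whether the model treats groups differently is never asked.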
Furthermore, 18% of agencies cite "black box" concerns (the inability to explain AI-driven outputs) as a major barrier. As Marlene Amstad, chair of the Swiss Financial Market Supervisory Authority, or FINMA, has argued, "supervisory decisions must remain explainable and accountable," with a "human in the loop" for all significant interventions.
Accountability begins far upstream, rooted in data foundations. Among financial authorities, 64% report fragmented or inconsistent data as a key challenge, and this weakness tends to travel downstream into AI-enabled supervisory decisions. Poor-quality, incomplete or unrepresentative data increases the risk of biased or misleading outputs, particularly in areas such as consumer protection, financial inclusion and conduct supervision. This means that ethical failures are often baked in long before a model is trained or deployed.
Strong data governance is therefore a core element of ethical infrastructure, a point underscored by Bernard Nsengiyumva of the National Bank of Rwanda. It includes clear data ownership, documentation of data provenance, ongoing quality controls and explicit consideration of who or what may be underrepresented in supervisory datasets.
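As a hedged illustration of what "ongoing quality controls" can mean in practice, the sketch below checks a supervisory dataset for missing values and for groups that fall below a minimum share of the sample. The field names, group labels and threshold are assumptions made for the example, not any authority's actual schema.

```python
# Illustrative only: basic completeness and representation checks on a
# hypothetical supervisory dataset (a list of dict records).
def data_quality_report(records, required_fields, group_field, min_share=0.05):
    n = len(records)
    missing = {f: sum(1 for r in records if r.get(f) in (None, ""))
               for f in required_fields}
    shares = {}
    for r in records:
        g = r.get(group_field, "unknown")
        shares[g] = shares.get(g, 0) + 1
    shares = {g: c / n for g, c in shares.items()}
    underrepresented = [g for g, s in shares.items() if s < min_share]
    return {"rows": n, "missing_counts": missing,
            "group_shares": shares, "underrepresented": underrepresented}

report = data_quality_report(
    records=[{"id": 1, "region": "north", "balance": 100},
             {"id": 2, "region": "south", "balance": None}],
    required_fields=["balance"], group_field="region")
print(report)
```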
The need for strong foundations becomes even more urgent as financial authorities move toward agentic AI — systems that are increasingly autonomous, goal-driven and capable of acting with limited human intervention. Agentic systems promise efficiency and scale, but they also expand the risk surface. New vulnerabilities such as prompt injection or unintended task execution can undermine supervisory control if safeguards are not built in by design. This requires moving beyond basic tool usage toward algorithmic literacy, equipping supervisors to interrogate model behavior, understand limitations and intervene when outputs conflict with supervisory judgment or public interest objectives.
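What "safeguards built in by design" might look like in code is sketched below: a wrapper that lets an agent execute only allowlisted actions and escalates anything consequential to a human supervisor, denying everything else by default. The action names and policy here are hypothetical; the point is the deny-by-default, human-in-the-loop pattern.

```python
# Illustrative only: a guardrail around a hypothetical supervisory agent.
ALLOWED_ACTIONS = {"draft_report", "request_data"}      # agent may act alone
HUMAN_REVIEW_ACTIONS = {"issue_warning", "open_case"}   # human sign-off required

def execute_agent_action(action, payload, human_approver=None):
    """Run an agent-proposed action only if policy allows it."""
    if action in ALLOWED_ACTIONS:
        return f"executed {action}"
    if action in HUMAN_REVIEW_ACTIONS:
        if human_approver and human_approver(action, payload):
            return f"executed {action} after human approval"
        return f"blocked {action}: awaiting human approval"
    # Anything not explicitly listed is rejected (deny by default).
    return f"rejected {action}: not on allowlist"

print(execute_agent_action("draft_report", {}))
print(execute_agent_action("issue_warning", {"entity": "bank_x"}))
print(execute_agent_action("transfer_funds", {}))
```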
Several authorities are proving that embedding ethics is possible.
To close the accountability gap, leaders must move beyond intent and prioritize concrete operational policies that include transparency about how AI is used in supervision and where human judgment remains decisive. This involves translating high-level principles such as security, accuracy, fairness and explainability into measurable code and protocols, including auditability and bias testing, while defining clear liability for when a model or autonomous agent makes an error. Additionally, authorities must undertake ethical impact assessments that examine real supervisory effects and complete training that equips supervisors not just to use AI tools, but to question them — a critical gap given that the State of SupTech Report 2025 found that only 12% of authorities currently mandate training on ethical AI principles for developers and users.
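One small example of turning principles into "measurable code" is an append-only decision log that records, for every AI-assisted supervisory decision, the model version, a hash of the inputs, the model's output and whether a human overrode it. The sketch below is illustrative; the field names and logging approach are assumptions, not a reference implementation.

```python
# Illustrative only: an append-only decision log for AI-assisted supervision.
import hashlib, json
from datetime import datetime, timezone

def log_decision(log, model_version, inputs, model_output, human_decision, reviewer):
    """Append a record of one AI-assisted decision, noting any human override."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "model_output": model_output,
        "human_decision": human_decision,
        "overridden": model_output != human_decision,
        "reviewer": reviewer,
    }
    log.append(record)
    return record

audit_log = []
log_decision(audit_log, "risk-model-1.3", {"entity": "bank_x", "ratio": 0.07},
             model_output="flag", human_decision="no_action", reviewer="supervisor_42")
print(json.dumps(audit_log[-1], indent=2))
```

A log of this kind is what makes transparency reports, ethical audits and liability rules workable in practice: it shows exactly where human judgment remained decisive.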
The tipping point for supervisory transformation is no longer the availability of tools, but the governance and trust behind them. We cannot afford to have over 60% of authorities racing toward an AI-driven future while the majority still lack basic accountability frameworks. Deploying these systems without scaffolding is a systemic risk that could lead to discriminatory outcomes, unaddressed market vulnerabilities and a catastrophic loss of public trust that destabilizes the entire economy. If financial authorities are to remain trusted guardians of stability, ethical governance must become core supervisory infrastructure.






