Treasury issues new AI risk tools for banks

Scott Bessent, U.S. Treasury secretary, testifies before the House Financial Services Committee. Graeme Sloan/Bloomberg
  • Key insight: The Treasury released an AI lexicon and a risk management framework specifically tailored to help financial institutions safely adopt AI technology. 
  • What's at stake: Inconsistent terminology and generic guidelines have caused confusion, making it harder for banks to manage novel cyber threats, bias and compliance hurdles. 
  • Supporting data: The new framework provides institutions with a matrix of 230 control objectives to manage risks across the AI lifecycle.

Overview bullets generated by AI with editorial review


The U.S. Department of the Treasury released two artificial intelligence risk management tools on Thursday to help financial institutions safely adopt the technology, the start of a broader rollout this month of six such resources for the banking industry.

The Artificial Intelligence Executive Oversight Group, a public-private partnership for addressing cybersecurity and operational gaps in how banks use the technology, developed the lexicon of AI-related terms and the finance-targeted AI risk management framework released Thursday.

For U.S. bankers, the sector-specific lexicon and framework provide a road map for navigating the complex opportunities (transforming operations and customer service) and threats (novel cybersecurity vulnerabilities, bias and compliance hurdles) that artificial intelligence presents.

Treasury leaders announced the conclusion of the public-private initiative on Wednesday, marking the completion of work on the tools and the start of their rollout in the coming days.

The government will release the remaining four resources in stages throughout the rest of February. The rest of the suite will cover governance and accountability, data integrity and security, fraud and digital identity, and operational resilience, according to initial statements.

The idea, ultimately, is to strengthen the security of AI infrastructure, promote secure deployment and keep the financial system resilient against sophisticated cyber threats.

"This work demonstrates that government and industry can come together to support secure AI adoption that increases the resilience of our financial system," Treasury Secretary Scott Bessent said in a Wednesday press release.

The groups behind the framework

Two major groups spearheaded the Artificial Intelligence Executive Oversight Group: the Financial Services Sector Coordinating Council, or FSSCC, and the Financial and Banking Information Infrastructure Committee, or FBIIC.

The FSSCC (the private part of the public-private partnership) is an industry-led, nonprofit organization that coordinates critical infrastructure security.

The council comprises more than 70 organizations, including JPMorganChase, Mastercard, the American Council of Life Insurers, the Options Clearing Corp., and the Financial Services Information Sharing and Analysis Center. Deborah Guild of PNC chairs the FSSCC, alongside Vice Chair Heather Hogsett of the Bank Policy Institute.

The FBIIC, which consists of 18 federal and state regulatory organizations, has operated since 9/11 under the President's Working Group on Financial Markets to improve coordination among regulators and enhance sector resiliency.

The FBIIC includes the Federal Deposit Insurance Corp. and the Federal Reserve Board. Treasury's Assistant Secretary for Financial Institutions Luke Pettit chairs the committee.

Treasury and the FSSCC have collaborated to release suites of resources in the past. In July 2024, the two groups published a set of tools outlining effective practices for secure cloud computing adoption.

The two groups also credited the Cyber Risk Institute, or CRI, as a co-author on the framework released Thursday. The institute is a nonprofit coalition of financial institutions and trade associations that seeks to develop and harmonize risk management standards for cybersecurity, technology and AI on behalf of the whole financial services sector.

A lexicon of common definitions for sometimes confusing AI terms

The AI Lexicon released Thursday seeks to establish a common language to help financial institutions and regulators communicate more clearly about artificial intelligence risks and capabilities.

As banks increasingly rely on artificial intelligence for customer service and operational decisions, inconsistent terminology has sowed confusion, harming governance and oversight, according to FSSCC and FBIIC.

To fix this, Treasury and industry partners compiled common technical and risk management terms, drawing definitions from academic publications, government resources and existing standards.

The lexicon serves as an optional tool for U.S. bankers rather than a legally binding document that regulators will use to interpret regulations or contracts.

By getting everyone on the same page, the resource aims to smooth out communication across the legal, technical and business teams that manage bank operations.

"Clear terminology and pragmatic risk management are essential to accelerating AI adoption in financial services," according to Paras Malik, chief artificial intelligence officer at Treasury. She said the lexicon and other resources reduce uncertainty and support consistent implementation for banks.

Risk management framework builds on existing federal guidance

The Financial Services AI Risk Management Framework adapts existing federal guidelines on AI risks, which are generic and abstract enough to apply to any sector, into targeted advice for banks and other financial services companies.

The framework gives institutions a set of tools, including a questionnaire to help determine their current AI adoption stage and a matrix of 230 control objectives to manage risks across the technology's lifecycle.

Because the framework categorizes controls by adoption stage, banks do not have to waste resources on controls that do not (yet) apply to their operations.
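
To make the stage-based design concrete, here is a minimal sketch, in Python, of how a risk team might represent such a control matrix and filter it by adoption stage. The stage names, control identifiers and fields below are illustrative assumptions, not the framework's actual contents.

```python
from dataclasses import dataclass

# Hypothetical adoption stages, ordered from least to most mature.
# The actual framework defines its own stages; these are placeholders.
STAGES = ["exploring", "piloting", "deploying", "scaling"]

@dataclass
class ControlObjective:
    control_id: str      # e.g., "TPRM-12" (illustrative identifier)
    description: str
    earliest_stage: str  # first adoption stage at which the control applies

def applicable_controls(matrix: list[ControlObjective],
                        current_stage: str) -> list[ControlObjective]:
    """Return only the controls that apply at or before the current stage,
    so teams can skip controls that do not (yet) apply to their operations."""
    cutoff = STAGES.index(current_stage)
    return [c for c in matrix if STAGES.index(c.earliest_stage) <= cutoff]

# Example: a two-row slice of a hypothetical control matrix.
matrix = [
    ControlObjective("GOV-01", "Maintain a centralized AI system inventory", "exploring"),
    ControlObjective("OPS-07", "Define shutdown mechanisms for autonomous agents", "deploying"),
]
print([c.control_id for c in applicable_controls(matrix, "piloting")])  # ['GOV-01']
```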

Before this week, the main guidance available to the financial services industry in this area was the National Institute of Standards and Technology, or NIST, AI Risk Management Framework, released in January 2023.

Industry groups such as the Financial Services Information Sharing and Analysis Center, or FS-ISAC, have also published white papers on adversarial threats and responsible artificial intelligence principles in the past.

The AI risk framework released Thursday is "an operationalization" of the NIST framework, "specifically tailored for financial services," according to FSSCC.

The new framework translates the high-level principles of the NIST framework into actionable, sector-specific control objectives that organizations can scale to their size.

Josh Magri, CEO of the Cyber Risk Institute, promoted the framework as providing scalable guidance tailored to varying stages of adoption.

"It's an essential resource for both community and multinational institutions alike, empowering them to effectively manage AI risks while driving growth and innovation," Magri said in a Thursday press release from Treasury.

Framework answers specific calls from AI experts

After the Wednesday announcement by Treasury, outside observers highlighted the need for specific controls enumerated in the risk management framework.

"A significant gap in AI governance and security is whether small and midsize firms can reasonably manage third-party risks with AI," said David Brumley, chief AI and science officer at crowdsourced cybersecurity firm Bugcrowd.

The new framework attempts to close this gap by providing scalable guidance that community banks can use, and it dedicates an entire section to establishing third-party risk management processes, specifically mandating due diligence for vendor data practices.

Chris Radkowski, a governance, risk and compliance expert at identity security provider Pathlock, said the guidance ought to "squarely address model integrity risks" because compromised underlying data means compromised downstream decisions. He also called for clear requirements for AI model inventories and human review checkpoints for autonomous systems.

The framework directly incorporates these controls by calling for organizations to build a centralized artificial intelligence inventory, establish data quality and provenance standards, and define strict human oversight roles to prevent unchecked autonomous decisions.
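
As a rough illustration only, the sketch below models what a single entry in such a centralized inventory might capture: provenance metadata, a validation flag and a named human overseer. Every field name here is an assumption made for the example, not a schema prescribed by the framework.

```python
from dataclasses import dataclass, field

@dataclass
class AIInventoryRecord:
    # Identification and accountability
    model_name: str
    owner: str                  # accountable business unit
    human_overseer: str         # named reviewer for autonomous decisions
    # Data quality and provenance
    training_data_sources: list[str] = field(default_factory=list)
    data_validated: bool = False
    # Oversight posture
    autonomous: bool = False
    requires_human_review: bool = True

record = AIInventoryRecord(
    model_name="fraud-scoring-v3",
    owner="Fraud Analytics",
    human_overseer="Model Risk Officer",
    training_data_sources=["core-transactions", "vendor-device-signals"],
    data_validated=True,
    autonomous=True,
)

# A basic governance check: an autonomous system must keep a human
# review checkpoint, so unchecked autonomous decisions are flagged.
assert not (record.autonomous and not record.requires_human_review), \
    "Autonomous system lacks a human review checkpoint"
```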

Ram Varadarajan, CEO at cyber deception technology firm Acalvio, said Treasury "must prioritize mitigating adversarial model manipulation, such as data poisoning and prompt injection."

He also recommended that the guidance "mandate real-time behavioral guardrails and automated circuit breakers that disconnect AI agents when their outputs deviate from defined ethical or financial logic boundaries."

The framework addresses this recommendation by explicitly calling for financial institutions to design system shutdown and deactivation mechanisms for the rapid, controlled disengagement of systems exhibiting inconsistent performance.
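
A common software pattern for that kind of controlled disengagement is a circuit breaker that trips when an agent's outputs repeatedly leave a defined boundary. The sketch below illustrates the general pattern only; the thresholds, bounds and loan-pricing example are invented for illustration and are not drawn from the framework.

```python
class AICircuitBreaker:
    """Trips and disconnects an AI agent when its outputs repeatedly
    deviate from defined boundaries (a generic illustration of the pattern)."""

    def __init__(self, max_violations: int = 3):
        self.max_violations = max_violations
        self.violations = 0
        self.tripped = False

    def check(self, output_value: float, lower: float, upper: float) -> bool:
        """Return False once the breaker has tripped; count out-of-bounds outputs."""
        if self.tripped:
            return False
        if not (lower <= output_value <= upper):
            self.violations += 1
            if self.violations >= self.max_violations:
                self.tripped = True  # rapid, controlled disengagement
        return not self.tripped

breaker = AICircuitBreaker(max_violations=2)
# Hypothetical agent outputs: a loan-pricing model bounded to [0.0, 0.25].
for rate in [0.05, 0.40, 0.60, 0.07]:
    if not breaker.check(rate, lower=0.0, upper=0.25):
        print("Agent disconnected pending human review")
        break
```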
