Banks unite to tackle AI risks with cybersecurity guidance


A consortium of banks dedicated to cybersecurity released a set of white papers this month outlining the threats, risks and responsible use cases of artificial intelligence within financial services.

The six papers from the nonprofit Financial Services Information Sharing and Analysis Center (FS-ISAC) address topics ranging from the cybersecurity risks associated with AI to how banks can leverage AI in their cyber defenses and what principles banks should consider when creating AI-based tools and applications.

The papers, which FS-ISAC refers to together as a "framework," are designed to accompany resources from other nonprofit organizations and governments that address the same set of problems — namely, managing the risks and exploiting the powers of artificial intelligence in an ethical and secure manner.

FS-ISAC's framework joins a long list of others like it, including an AI risk management framework from the National Institute of Standards and Technology (NIST), secure AI development guidelines developed by the security agencies of multiple nations and a white paper on generative AI development principles from the Association for Computing Machinery.

FS-ISAC's AI framework is a timely development, according to Benjamin Dynkin, executive director at Wells Fargo and chair of FS-ISAC's AI Risk Working Group, which created the six white papers, because banks are facing "increased pressure to capitalize on AI integration."

"These papers provide point-in-time guidance on using AI securely, responsibly, and effectively, while offering tangible steps the sector can take to counteract the rising risks associated with AI," Dynkin said.

The six-paper framework serves a vital role in financial institutions' fight against threat actors who have started using generative AI to enhance their attacks, according to Hiranmayi Palanki, principal engineer at American Express and vice chair of FS-ISAC's AI Risk Working Group.

"The multi-faceted nature of AI is both compelling and ever-changing, and the education of the financial services industry on these risks is imperative to the safety of our sector," Palanki said.

Although plenty of documentation already exists, FS-ISAC claims its white papers constitute the first set of standards and guidance curated specifically for the financial services industry. (Some regulators have issued guidance specific to the industry, including the New York Department of Financial Services.)

While the six papers address a wide range of AI topics, FS-ISAC highlighted four "priority" threats posed by AI that it recommended financial institutions give special attention to: deepfakes, employee use of generative AI, new phishing and business email compromise techniques and improper information use (i.e., proprietary, copyrighted or erroneous information).

FS-ISAC addressed each of these four priority threats in "Combating Threats and Reducing Risks Posed by AI," which outlines AI-related threats and the mitigations banks can employ to combat them. Besides these four priority threats, the paper also covers cybercriminals' own GPT models, data poisoning and several other topics.


The five other white papers address the following:

"Adversarial AI Frameworks: Taxonomy, Threat Landscape, and Control Frameworks" focuses on the threats associated with AI in financial services, offering a taxonomy of risks such as hallucinations and prompt injection, and is closely related to NIST's AI risk management framework.

Where other papers outline AI's threats, "Building AI Into Cyber Defenses" takes a solutions-oriented look at how banks' cybersecurity teams can use AI to automate processes, analyze data and generate reports, among other capacities.

"Responsible AI Principles" describes the fundamental principles that undergird the rest of the framework. These principles include the safety, security and resilience of AI systems, the explainability and interpretability of the systems and the privacy implications of their use.

"Generative AI Vendor Evaluation and Qualitative Risk Assessment," which is accompanied by a spreadsheet tool, offers banks a rubric for assessing and selecting generative AI vendors. FS-ISAC says the tool can act as a starting point for banks with a comprehensive due diligence process and low risk appetite, or as a complete process for banks with higher risk appetites.

"Framework of Acceptable Use Policy for External Generative AI" helps financial institutions develop an acceptable use policy when incorporating external generative AI into security programs, offering examples of what permissive and stringent policies might entail.

Accompanying the framework and vendor tool, FS-ISAC also released a document titled "Financial Services and AI: Leveraging Opportunities, Managing Risks," which acts as an index for the six white papers.

Among the institutions that contributed to the development of the framework are Wells Fargo, Goldman Sachs, FirstBank, Bank of Hope, NBT Bancorp, MUFG Bank, Ally Financial and nonbanks including Mastercard, American Express and Aflac.
