BankThink

Banks cannot skimp on AI vendor vetting

The recent leadership gyrations at OpenAI and Microsoft are unlikely to have a direct, short-term impact on how banks should evaluate AI vendors. However, the resolution of the shakeup does signal a stronger commitment from Microsoft and OpenAI to accelerating innovation around generative AI. In this case, it appears the commercializing wing of the gen AI world won out over the incrementalist wing. This implies that Microsoft and OpenAI will continue to develop new large language models (LLMs) at pace, and that investment in new AI capabilities will remain aggressive.

As generative AI capabilities become available to everyone, banks and other institutions will want to build intelligent solutions that provide revolutionary new capabilities, first for their employees and, over time, for their customers. Microsoft, for example, will help enable this with Microsoft 365 Copilot, which became generally available in November.

The use of AI in banking is still in its early stages, and banks should evaluate AI vendors carefully against their own specific needs and requirements. That means assessing each vendor's AI governance processes, its defined roles and responsibilities for AI development, its ethical code of AI conduct and its conformity to AI governance standards.

Banks should also demand evidence substantiating performance claims, understand the source and collection methods of the datasets used to train the underlying models and conduct extensive internal testing to validate the effectiveness of any AI solution in the banking space.
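As a minimal sketch of what that internal testing could look like, the Python harness below scores a vendor model against a bank-curated, labeled test set. Everything in it is an assumption for illustration: `vendor_classify` stands in for whatever interface a vendor actually exposes, and the sample cases are invented.

```python
# Minimal sketch of an internal validation harness for a vendor AI model.
# Hypothetical throughout: vendor_classify stands in for the vendor's real
# interface, and the test cases are illustrative placeholders.

from typing import Callable

# Labeled examples the bank curates itself, e.g. past transactions with
# known outcomes. A real test set should be large and representative.
TEST_CASES = [
    {"input": "wire transfer of $9,900 split across 3 accounts", "expected": "flag"},
    {"input": "recurring $120 utility payment to a known payee", "expected": "clear"},
]

def evaluate(vendor_classify: Callable[[str], str]) -> float:
    """Return the vendor model's accuracy on the bank's own labeled test set."""
    correct = sum(
        1 for case in TEST_CASES
        if vendor_classify(case["input"]) == case["expected"]
    )
    return correct / len(TEST_CASES)

if __name__ == "__main__":
    # Stub standing in for a real vendor call, just to make the sketch runnable.
    stub = lambda text: "flag" if "$9,900" in text else "clear"
    print(f"Accuracy on internal test set: {evaluate(stub):.0%}")
```

The point of owning a harness like this is that the bank, not the vendor, controls the test data, so reported performance claims can be checked against cases that matter to the bank's own book of business.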

Lastly, given the capital-intensive nature of generative AI applications, banks should require visibility into vendors' financials. That should include burn rate and the availability of capital to fund required investments.

One revealing element of the OpenAI incident was that the business model for generative AI providers is not clear, and current revenues are not even close to covering the massive fixed costs needed to build and evolve an LLM application like GPT. Traditional countermeasures to solvency risk, such as escrowing source code, are not likely to be feasible here, given the size of the training datasets needed and the dynamic nature of the applications themselves.

Partnering with an expert third-party vendor can help mitigate AI bias and improve fairness and transparency. It can also help banks shorten their time to market and accelerate customer growth. As banks think through their AI partnerships, there are a few critical questions to consider:

  • Should banks partner with AI vendors to source out-of-the-box software, which may even run in a cloud environment; build proprietary platforms in-house; or take a hybrid approach in which vendors advise and develop while the bank launches and runs the AI solutions? Note that a bank may choose different answers for different solutions, such as in-house for AI/machine learning customer analytics and outsourced for generative AI internal support chatbots.
  • How do banks keep up with the regulations that govern AI, especially when using it to evaluate the creditworthiness of customers? How should they weigh the opportunities and challenges of AI-delivered financial advice, and how can they ensure that AI-based interactions with customers are consistent with the bank's overall customer experience strategy?

AI is a fluid and dynamic space, so it makes sense for banks to place multiple bets on different vendors. Today's leaders may well be leapfrogged by others over the next few years as super-capitalized players like Google, Amazon, Apple and IBM develop their capabilities. Banks also need to consider how they manage procurement and vendor management processes internally. The regulatory regime for AI is still unclear globally, and there is little guidance available on how to manage AI applications with respect to ethics and risk.

Given the highly accessible nature of applications like GPT, if usage is left unmanaged, many bank employees are likely to start experimenting unofficially with use cases such as using AI to generate call center scripts or integrating with an AI API to collect public information on customers.
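To illustrate how low that barrier is, here is a rough sketch of what such an unofficial integration could look like using OpenAI's Python SDK. The model name and prompts are placeholders chosen for the example, not a recommendation, and any real use would need to go through the bank's governance process.

```python
# Illustrative only: roughly how little code an employee needs to have a
# public LLM API draft a call center script. Model and prompts are
# placeholder assumptions for this sketch.

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder; any available chat model works
    messages=[
        {"role": "system", "content": "You write call center scripts for a retail bank."},
        {"role": "user", "content": "Draft a script for handling a disputed card charge."},
    ],
)

print(response.choices[0].message.content)
```

A dozen lines and a personal API key are enough, which is precisely why unmanaged experimentation spreads quickly inside a firm.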

To maintain some control over the situation, banks should put a senior leader in place as head of AI for the entire firm, supported by a cross-functional team drawn from procurement, IT, legal, risk and compliance and human resources, to manage current use cases and proactively set standards and guidelines for how AI will be used at the bank. This team should also have the authority and ability to monitor and shut down any unauthorized uses of generative AI, and to approve use cases that meet the team's requirements.
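One way such a team might operationalize that authority is with a simple internal registry of generative AI use cases. The sketch below is hypothetical; every field, status and policy rule in it is an assumption for illustration rather than a prescribed design.

```python
# Hypothetical sketch of an internal registry the cross-functional AI team
# could use to track, approve and shut down generative AI use cases.

from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    PROPOSED = "proposed"
    APPROVED = "approved"
    NEEDS_REVIEW = "needs_review"
    SHUT_DOWN = "shut_down"

@dataclass
class UseCase:
    name: str
    owner: str            # business unit accountable for the use case
    vendor: str           # which model provider is involved
    handles_customer_data: bool
    status: Status = Status.PROPOSED

@dataclass
class Registry:
    cases: list[UseCase] = field(default_factory=list)

    def approve(self, case: UseCase) -> None:
        # Assumed policy for this sketch: anything touching customer data
        # is held for deeper review rather than approved on the fast path.
        if case.handles_customer_data:
            case.status = Status.NEEDS_REVIEW
        else:
            case.status = Status.APPROVED

    def shut_down(self, name: str) -> None:
        # The shut-down authority the article calls for, in miniature.
        for case in self.cases:
            if case.name == name:
                case.status = Status.SHUT_DOWN

if __name__ == "__main__":
    registry = Registry()
    chatbot = UseCase("internal support chatbot", "IT", "OpenAI", handles_customer_data=False)
    registry.cases.append(chatbot)
    registry.approve(chatbot)
    print(chatbot.status)  # Status.APPROVED
```

Even a lightweight record like this gives the team a single place to see what is running, who owns it and what has been stopped, which is the precondition for enforcing any standard at all.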

Our advice is not to put all their eggs in one basket, but to partner with multiple vendors after engaging them in a robust screening process. It's essential that banks take a meticulous approach that adapts to changing circumstances if they are to truly benefit from AI's transformative potential.
