How JPMorganChase democratized employee access to gen AI

JPMorganChase was the first big bank to roll out generative AI to almost all of its employees through a portal called LLM Suite. As of mid-May, it's being used by 200,000 people.

"We think that AI has the potential to really deliver amazing scale and efficiency as well as client benefit," Teresa Heitsenrether, chief data and analytics officer, told Bloomberg in May. "It's a real priority for us, and we've been very focused on this for a long time."

The bank, like many others, has used traditional AI and machine learning for years in areas like fraud detection, risk management and marketing.

"But the big surprise really came with generative AI, which really opens up new possibilities for us," Heitsenrether said.

LLM Suite is an abstraction layer through which large language models like OpenAI's GPT-4 can be swapped in and out. The models draw on proprietary JPMorganChase data, though they are not trained on it by outside providers.
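
The article does not describe LLM Suite's internals, but the idea of an abstraction layer can be illustrated with a minimal sketch: application code talks to one interface, and the model behind it can be swapped. All class and function names below are hypothetical; this is not JPMorganChase's code.

```python
# Hypothetical sketch of an abstraction layer over interchangeable LLMs.
from abc import ABC, abstractmethod


class ChatModel(ABC):
    """Common interface every underlying model must satisfy."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class VendorHostedModel(ChatModel):
    """Wraps a vendor-hosted model (e.g., a GPT-4-class model) behind the shared interface."""

    def complete(self, prompt: str) -> str:
        # A real implementation would call the vendor API; a placeholder keeps the sketch self-contained.
        return f"[vendor model response to: {prompt!r}]"


class InHouseFineTunedModel(ChatModel):
    """Wraps a smaller model fine-tuned for a narrow task."""

    def complete(self, prompt: str) -> str:
        return f"[fine-tuned model response to: {prompt!r}]"


def answer(prompt: str, model: ChatModel) -> str:
    # Application code depends only on the interface, so models can be swapped in and out.
    return model.complete(prompt)
```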

The bank's lawyers use LLM Suite to analyze contracts. Bankers use it to prepare presentations for clients and to generate draft emails and reports.

"What it's doing is really freeing up time and capacity," Heitsenrether said. "On average, we're seeing people gaining an hour or two hours of productivity a week, which is really quite significant. And we're really still in the early stages."

The project is "advanced in scope and ambition," said Alex Jimenez, lead principal strategy consultant at Backbase. "Deploying a proprietary large language model at this scale is an industry-leading move. Unlike others, they aren't just testing but embedding it deep into the daily workflows of bankers, compliance teams, technologists. The real advancement isn't just the tech but the institutional integration."

This project is setting the tone for other banks, he said. "The rollout likely puts pressure on peer banks to accelerate or scale up their own gen AI initiatives. It is influencing vendor roadmaps and internal AI governance discussions across the industry."

The making of LLM Suite

Chief Analytics Officer Derek Waldron and his team began working on LLM Suite two years ago.

"The thought experiment that we went through was, let's say that these models just keep getting better and better and better and better, which they have," Waldron told American Banker. "And continue to imagine that a super intelligent model or AI system shows up on our doorstep. The problem statement, then, is how do you put it to work inside JPMorganChase? We wanted to enable generative AI technology for the firm, but we wanted to do so in a very safe and secure way that made sure that we were able to understand the data lineage, etc. And that was a principal reason why we wanted to build in-house the way we did — for safety and security."

The abstraction layer was created out of the recognition that "in the future, different models will be good for different things, and so we don't want to architect ourselves around one particular model," Waldron said. "Instead, we want to abstract it. So that when employees have a particular application, they have some flexibility in the models they choose so they can pick the best one at hand."

The bank tests and vets new models for safety and security, as well as their applicability to different use cases, before bringing them into its LLM Suite. Some large language models are good at synthesis and reasoning, while others are good at coding or complex document analysis, Waldron said. Small models can be fine-tuned for specific tasks.
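
Waldron's point that different models suit different use cases could, in principle, be handled with a registry of vetted models tagged by strength. The sketch below is a hypothetical illustration; the model names and capability tags are invented, not the bank's.

```python
# Hypothetical model registry: only vetted models are listed, each tagged by strength.
VETTED_MODELS = {
    "general-reasoning-xl": {"strengths": {"synthesis", "reasoning"}},
    "code-assistant-m": {"strengths": {"coding"}},
    "doc-analyst-l": {"strengths": {"document_analysis"}},
    "contracts-small-ft": {"strengths": {"contract_review"}},  # a small, fine-tuned model
}


def pick_model(task: str) -> str:
    """Return the first vetted model whose declared strengths cover the task."""
    for name, meta in VETTED_MODELS.items():
        if task in meta["strengths"]:
            return name
    raise ValueError(f"No vetted model registered for task: {task}")


print(pick_model("coding"))           # -> code-assistant-m
print(pick_model("contract_review"))  # -> contracts-small-ft
```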

Connectivity to the bank's databases and applications was critical to the development of LLM Suite, as was the ability to roll models out to all employees.

"It's one thing to get access to a model via a cloud provider or via API connectivity and enable that for application developers," Waldron said. "It's totally another thing to roll that out to 250,000 employees. So we placed a big, big bet on this democratizing aspect of the technology. We thought that if we put self-service capabilities that are very high quality, consumer grade and connected to JPMorganChase systems, and we make those available to the whole firm at large, we would truly scale AI innovation and adoption in the firm. And that turned out to be correct."

Real-time, accurate data is important for these models to generate useful answers. The bank is gradually connecting its datasets to LLM Suite, including all of its news subscriptions and earnings transcript libraries. "When these get connected and distributed to the whole population, all of a sudden, employees can do things in an automated way that they could never do before," Waldron said. (The bank will still pay for its news subscriptions, but for firmwide access rather than individual accounts.)
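
One common way to connect such datasets is to retrieve relevant documents and fold them into the prompt, an approach generally known as retrieval-augmented generation. The sketch below assumes a toy transcript library and naive keyword matching; it is an illustration of the pattern, not how LLM Suite actually works.

```python
# Hypothetical sketch of grounding a prompt in a connected, firm-approved dataset
# (e.g., an earnings-transcript library). Retrieval here is a naive keyword match;
# a production system would use a proper search or vector index.
TRANSCRIPT_LIBRARY = {
    "ACME Q1 2025": "Revenue grew 8% year over year; management raised full-year guidance.",
    "ACME Q4 2024": "Margins compressed on higher input costs.",
}


def retrieve(query: str) -> list[str]:
    terms = query.lower().split()
    return [text for title, text in TRANSCRIPT_LIBRARY.items()
            if any(term in (title + " " + text).lower() for term in terms)]


def grounded_prompt(question: str) -> str:
    context = "\n".join(retrieve(question)) or "No matching documents."
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"


print(grounded_prompt("What did ACME say about guidance?"))
```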

"We are very deliberate in the process of laying out what is the authoritative source, and before a data source gets connected into the general ecosystem, we make sure that we've answered the question, is this an authoritative source who is responsible for the integrity, the ownership? Is it contradictory or duplicated, versus other datasets?" Waldron said.

At the same time, however, JPMorganChase is careful not to let any of its data get sucked into public-facing large language models like ChatGPT.

"AI by its technological nature is capable of absorbing anything that it comes across, and so more now than ever, I think institutions at large need to be very, very careful and thoughtful as to where their data goes," Waldron said. "As we built out our systems, we had all the necessary assurances that we know where data is going and how it's treated along the way. And obviously we do not allow third parties to train their models on proprietary JPMorgan data."

This is true even for the prompts employees enter. "Where we are today, so much information is embedded into the prompt," Waldron said.
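
One way to keep prompts from leaking to public-facing services is to route them only through approved internal endpoints. The sketch below assumes a placeholder allowlist and endpoint URL; it is illustrative and does not describe the bank's actual controls.

```python
# Hypothetical guardrail: prompts may only be sent to endpoints the firm has approved,
# so proprietary data embedded in prompts never reaches public-facing services.
APPROVED_ENDPOINTS = {"https://llm-suite.internal.example/v1/chat"}  # placeholder URL


def send_prompt(prompt: str, endpoint: str) -> None:
    if endpoint not in APPROVED_ENDPOINTS:
        raise PermissionError(f"Blocked: {endpoint} is not an approved model endpoint.")
    # Forward the prompt to the approved, internally governed endpoint here.
    print(f"Prompt routed to {endpoint}")


send_prompt("Summarize this client agreement ...",
            "https://llm-suite.internal.example/v1/chat")
```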

How JPMorganChase is using AI

The bank's aim is for every employee to have an AI assistant, for every process to leverage AI systems and for every client experience to have an AI component. Getting there will take years.

Commercial generative AI tools generally carry per-seat licensing costs, which add up quickly for a bank the size of JPMorganChase.

"That's been one of the roadblocks to widespread adoption, because business leaders naturally are asking the question up front, what's the ROI for that particular person?" Waldron said.

But because JPMorganChase built an internal platform, the only variable cost is compute, he said. If an employee doesn't use it, the bank does not pay for it. "That value proposition turned out to be very desirable to business leaders," Waldron said.
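
The economics Waldron describes can be sketched with rough numbers. The figures below are illustrative assumptions only, not JPMorganChase's actual prices or usage.

```python
# Back-of-the-envelope comparison of per-seat licensing vs. paying only for compute.
# All figures are illustrative assumptions.
EMPLOYEES = 200_000
SEAT_PRICE_PER_MONTH = 30.00         # assumed vendor per-seat price
ACTIVE_SHARE = 0.40                  # assumed share of employees active in a month
COMPUTE_COST_PER_ACTIVE_USER = 8.00  # assumed monthly compute cost per active user

per_seat_total = EMPLOYEES * SEAT_PRICE_PER_MONTH
usage_based_total = EMPLOYEES * ACTIVE_SHARE * COMPUTE_COST_PER_ACTIVE_USER

print(f"Per-seat licensing:  ${per_seat_total:,.0f}/month")    # $6,000,000/month
print(f"Compute-only usage:  ${usage_based_total:,.0f}/month")  # $640,000/month
# Under these assumptions, idle seats cost nothing in the usage-based model,
# which is the value proposition described above.
```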

For its overall adoption and use of AI, JPMorganChase has sat at the top of Evident's AI Index since the scorecard's launch in 2023.

"They started really early," Evident CEO Alexandra Mousavizadeh told American Banker. The bank was able to move quickly because it created data platforms for unstructured and fragmented data and got business leaders looking for use cases, she said.

"You've got to change the mindset of an entire organization, which is no small task," Mousavizadeh said. "What makes them so successful is that they started really early, like back in 2017 they built an AI research lab and formed a clear AI strategy. There was a clear data strategy."

Banks that are just starting this process now "have got a mountain of work to do," she said. "They can get there, but they've got to think quickly."
