
For as long as artificial intelligence has existed, people have worried about its potential harms. Until recently, these misgivings centered on fears that generative AI models would make mistakes, hallucinate or pick up bias from the data on which they are trained.
But the emerging use of agentic AI — systems designed to operate with autonomy, taking on tasks and making decisions without constant human oversight — brings AI worries to a new level.
Research released by the Infosys Knowledge Institute on Thursday found that 86% of executives believe agentic AI will introduce new risks and compliance issues. And 95% of C-suite and director-level executives say they have experienced AI-related incidents in the past two years, such as privacy violations, regulatory non-compliance or inaccurate or harmful predictions.
In recent weeks, experts have been warning that overreliance on AI, especially generative and agentic AI, can bring harm.
"There's always been a danger in relying too heavily on technology," said Dan Latimore, chief research officer at The Financial Revolutionist. "AI is the latest example, although the dangers can be more extreme."
As banks continue to roll out generative AI to their employee base and start to set up AI agents to autonomously handle tasks, here is a look at some of the latest concerns.
The danger of letting AI make decisions
Trevor Barran, CEO and co-founder of Financier, a fintech that connects small- and medium-sized companies to specialty lenders, told American Banker he sometimes hears executives say, "the LLM said I should do this," or "I threw this into ChatGPT, and it suggested that this should be our strategy."
Large language models like ChatGPT "are extremely good at having a conversation," he said. "They use the entire corpus of modern human language, from the internet and books and wherever they can get it. We've taught an algorithm how to have a pretty good conversation and how to access a lot of knowledge. Those two things are powerful, and they are disruptive." His company uses LLMs frequently.
But in Barran's view, this is not intelligence.
"If I have something that can carry on a conversation, it's very easy to fall into a romantic idea that there's some kind of thought or intellect that's driving the conclusions that are coming out of this very good conversationalist," he said. "That's where the whole thing falls apart, because that's not how they were designed. The danger is that you have CEOs and business leaders who maybe don't have a technical and computer science background saying, 'Wow, I can talk to a computer, I can ask it anything, and it will have an answer.' And they're taking the answer as something that is thoughtful and intelligent about the problem that they're trying to solve."
A business leader needs to consider all the information at hand and form a prediction or vision. That is intelligence, he said.
LLMs can be aids to original thinking, he said. "If it takes me three seconds to go do research that before it took me 10 days to do, ostensibly, I can spend more of my time coming up with original thinking based on that input," Barran said.
But it's a bad idea to outsource thinking to a generative AI model, he said.
"Think of all of these tools as impossible-to-tire-out college summer interns," Barran said. "They know nothing about your business. They know nothing about what's going on in the market. It's the first time they came out of school. They're in this little bubble. They'll work hard, they'll go research a question you asked and come back with something that sounds good. B, but they can't make any decision that is meaningful to your business. You have to make that decision."
The "fog of more" problem
The use of AI agents can quietly compound error rates in financial workflows, according to Blair Sammons, director of solutions for cloud and AI at Promevo, a consulting firm and Google partner.
"It's logarithmic in how the issues are compounded," Sammons told American Banker. "It's not, I have one issue in this model, I have one issue in this model, now I have two issues. It's, I have one issue in this model, I have one issue in this model, now I have 412 issues. Our brains can't follow how quickly these things go off the rails, simply because of how large they are, how many decisions they make. The techie nerd in me is very excited for the agent to agent protocols that are coming out in lieu of APIs, letting computer systems talk to other computer systems and solve problems … that we're facing has never been more flexible and fluid and is fantastic. That comes with the cost of, even one little screw- up now cascades into potentially massive issues."
The issues could include latency or hallucinations as well as errors. "Somebody makes a change in one of these five coding systems, and all of a sudden they nuke their entire production database," Sammons said. "These things compound so quickly on themselves."
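The math behind Sammons's point is multiplicative: small per-step error rates compound across every hand-off in a chain of agents, so end-to-end reliability decays far faster than intuition suggests. A minimal sketch of that arithmetic (the 98% per-step accuracy and the step counts are illustrative assumptions, not figures from Sammons):

```python
# Illustrative sketch: per-step errors in an agent chain compound
# multiplicatively. A chain of n independent steps, each with accuracy p,
# has end-to-end accuracy p ** n. The 98% figure is an assumption.

def chain_accuracy(per_step_accuracy: float, steps: int) -> float:
    """End-to-end accuracy of a pipeline of independent agent steps."""
    return per_step_accuracy ** steps

for steps in (1, 5, 20, 50):
    print(f"{steps:>3} steps at 98% each -> "
          f"{chain_accuracy(0.98, steps):.1%} end-to-end")

# Prints roughly:
#   1 steps at 98% each -> 98.0% end-to-end
#   5 steps at 98% each -> 90.4% end-to-end
#  20 steps at 98% each -> 66.8% end-to-end
#  50 steps at 98% each -> 36.4% end-to-end
```

Even a workflow where every individual agent is right 98% of the time is wrong more often than it is right once the chain runs long enough.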
Briana Elsass, head of digital experience and technology for BMO's retail and small-business segments, has been thinking about this as her team has been testing agentic AI.
"That's where I think checks and balances come into play, making sure that you're doing that due diligence around, what's the subset that you're able to pull?" she said. "Are you able to validate it against the data, are you able to compare it to the expected results? Doing that level of ongoing evaluation is going to be critical, because we're going to get a level of familiarity, a level of reliance, and if you're starting to propagate bad data, you're going to have bad results coming out of it."
Data quality, data remediation, agent remediation and agent quality are all critical to using agentic AI effectively, she said.
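The "checks and balances" Elsass describes amount to regression-style evaluation: run the agent against cases with known-good answers and watch the pass rate over time. A minimal sketch of that idea, in which run_agent and the golden cases are hypothetical placeholders rather than anything BMO has described:

```python
# Hypothetical sketch of ongoing agent evaluation against expected results.
# run_agent and GOLDEN_CASES are placeholders; a real harness would draw
# its cases from validated production data.

GOLDEN_CASES = [
    {"input": "balance for account 1234", "expected": "1,250.00"},
    {"input": "last deposit date for account 1234", "expected": "2024-05-01"},
]

def run_agent(prompt: str) -> str:
    raise NotImplementedError("stand-in for the agent under test")

def evaluate(cases) -> float:
    """Share of golden cases the agent answers correctly."""
    passed = 0
    for case in cases:
        try:
            if run_agent(case["input"]).strip() == case["expected"]:
                passed += 1
        except Exception:
            pass  # any failure counts against the agent
    return passed / len(cases)

if __name__ == "__main__":
    # Run on a schedule and alert when accuracy drifts below a threshold,
    # before bad data starts propagating into bad results.
    print(f"agent pass rate: {evaluate(GOLDEN_CASES):.0%}")
```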
The people problem
Sometimes CEOs are dazzled by the success stories presented at conferences run by hyperscalers like Amazon Web Services or Google Cloud "without realizing what actually made that project successful was a massive amount of change management and a massive amount of people shifting and a massive amount of people rowing the boat in the same direction to accomplish a similar goal," Sammons said.
Getting people to use advanced AI effectively might be the hardest part, he said.
"Years and years of tech people debt is now kind of hitting the fan," Sammons said. By "tech people debt," he meant not enabling people with the right tools and technologies.
"There have been several studies out about computer literacy, and you would think younger generations would be more computer-literate, and we're finding as they come to the workforce, they're not," he said. "They can use Facebook, they can use Instagram, they can use their phones. B, but give them an Excel spreadsheet, and they lose their mind."
AI models "can't think for you," Sammons said. "You have to have a base-level knowledge that lots of people just don't have."
Rising power costs
Before cloud computing came along, companies' biggest IT problem was storage, Sammons said.
"Storage was really expensive 20 to 30 years ago," he said. "Think about how much physical space a gigabyte took back in the day versus now. Storage was outrageously expensive and hard to do. So IT professionals like me were building data centers that were nothing but hard drives everywhere. Then we solved that problem, and storage is now super-cheap."
Today, with widespread use of resource-guzzling AI models, the big IT limitation is electricity, he said. "We're running out of it. Soon we will no longer be tracking our cloud spend purely on dollars. It will be more about efficiency, and not efficiency defined as how quickly did it take to solve this problem for the least amount of dollars, but efficiency in how much power did it take to actually deliver this result?"
Still, Sammons believes all the issues discussed in this article will work themselves out.
"There are concerns we need to be aware of today, but in the very near future, those things are going to sort themselves," he said. "At this point … this technology is too helpful, too powerful, makes people way too much money for it to not get those things solved."