How scammers' use of AI is affecting fintech investment

VC investor Adrian Mendoza says bank partners are taking a greater role in working with new tech companies.

Banks and other payment companies are pouring resources into generative artificial intelligence, with an emphasis on firms that can combat fraud and other financial crime.   

Like any anti-fraud effort, it's an arms race with the scammers. Google reported this week that cybercriminals can use generative AI and large language models to aid in social engineering and phishing, and will likely do so in 2024. 

That will create a potential need for new cybersecurity technology that can protect genAI-powered payment and fintech projects, providing a potential avenue for investment in an otherwise down market.

"Generative AI is developing rapidly and challenging firms' cybersecurity functions," said Kristen Jaconi, executive director of the Peter Arkley Institute for Risk Management at the University of Southern California's Marshall School of Business.

But AI is only as good as the data available to it.

"Companies need to understand their data better," Jaconi said. "There could be issues with data and sources of data being unknown." 

Generative AI refers to AI that draws on large amounts of data to create original content, such as code or sales pitches. Banks and other payment companies are using generative AI mostly for internal purposes such as writing IT requests, but are making major investments in anticipation of more sophisticated uses in the future.

There are signs that companies broadly are underreporting or underestimating cybersecurity risk, even before the impact of generative AI is taken into consideration. Forty percent of Fortune 500 companies said they had not experienced a "material" cybersecurity incident, according to Deloitte and USC's most recent study of cybercrime. Twenty-five percent of companies said the war in Ukraine had amplified cybersecurity risk, and 40% said remote work had increased it.

But the report came before a new Securities and Exchange Commission rule that requires public companies to disclose material data breaches and other cybersecurity incidents, though the SEC does not precisely define "material."

Given that Deloitte and USC were looking at cybercrime reporting between November 2022 and May 2023, and the impact of generative AI on cybercrime is still emerging, the number of cyberattacks is likely higher, according to Jaconi. "The next reporting season will see many more disclosures," she said.

Looking for skilled startups

Anticipating the heightened risk, Mendoza Ventures, a Boston-based fintech investor, is looking for opportunities at the intersection of AI and cybersecurity that lead to collaboration with banks.

"Everybody wants to use genAI, but if the data is decentralized, it's hard to do that," said Adrian Mendoza, founder and general partner of Mendoza Ventures. "It can be hard to change a core banking system to do that, but how can you make the data around the core easier to access?"

Mendoza Ventures recently added Truist Ventures, Truist Bank's venture capital unit, as a limited partner in its Early Growth Fintech fund. The $100 million fund will invest in early growth-stage startups, focusing on firms with diverse management teams. Truist joins Bank of America and Grasshopper Bank in investing with Mendoza Ventures.

The investments are part of an increase in collaboration between the banks and portfolio companies, Mendoza said. "Our theory has always been that LPs [limited partners] become participants. We don't want passive LPs, especially now when it's harder to raise funds." 

Bank of America, Grasshopper and Truist did not provide comments for this article.

Mendoza Ventures specializes in firms that develop AI, financial technology and cybersecurity. Banks, VCs and cybersecurity firms are focused on startups that emphasize data integrity, which is a challenge as AI advances and becomes more complicated, Mendoza said.

Mendoza Ventures' portfolio includes firms such as Fiverty, which has built an API that plugs into financial institutions to access more than 2 billion data sources to spot payment fraud early. Another firm, Praxidata, develops enterprise uses for generative AI.

As banks and fintechs increasingly collaborate, the focus on how to fight cybercrime is also changing, according to Mendoza. There will need to be more focus on anti-money-laundering and third-party risk in addition to the identity risk that has dominated most traditional cybersecurity, he said.

"As APIs become more important, cybersecurity will play a critical role," Mendoza said. 

Good AI vs bad

Outward VC, a fintech investor that includes the all-in-one card company Curve in its portfolio, referred questions to eXate, a recent investment. The fintech provides security for data sharing, a key element in real-time payments, embedded finance, open banking and uses for generative AI. 

"You have the bad AI attacking the good AI right now," said Peter Lancos, co-founder of eXate, who has also worked for banks such as HSBC.  "And combating AI fraud is underfunded in a lot of organizations."

Given the large number of AI-related projects and investments, eXate is anticipating a need for improved security that takes the pace of generative AI into consideration, as well as the risks arising from the expanded data sourcing that feeds AI.

"Everyone you read about is going through a digital transformation program," Lancos said, adding that it creates more APIs than "people know what to do with."

Generative AI can enable more granular data rules and access, based on a specific need at a specific time rather than on a general job title, according to Lancos. "Historically you have access to an application or you don't. It's binary. GenAI can make that much more detailed, more about 'who can do what right now.'"
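The contrast Lancos draws can be sketched in a few lines of code. This is an illustrative example only, not eXate's product; the role names, purposes and policy rules are hypothetical, and a real system would pull them from a policy engine rather than hard-code them.

```python
# Hypothetical sketch: binary role-based access vs. a context-aware
# decision that also weighs the purpose and time of a data request.

def binary_access(user_roles: set) -> bool:
    # Traditional model: you either hold the role or you don't.
    return "payments_analyst" in user_roles

def contextual_access(user_roles: set, purpose: str, hour_utc: int) -> bool:
    # Finer-grained model: the same role may query fraud signals
    # during business hours but never bulk-export raw customer data.
    if "payments_analyst" not in user_roles:
        return False
    if purpose == "fraud_review" and 8 <= hour_utc < 20:
        return True
    # Any other purpose (e.g. "bulk_export") is denied regardless of role.
    return False

roles = {"payments_analyst"}
print(binary_access(roles))                          # True: all-or-nothing
print(contextual_access(roles, "fraud_review", 10))  # True: allowed in context
print(contextual_access(roles, "bulk_export", 10))   # False: same role, denied
```

The point of the second function is that the decision surface is no longer a single role check but a function of who, why and when, which is the "who can do what right now" model Lancos describes.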

Banks and payment companies will need technology to monitor how users engage with large language models, according to Julien Bonnay, partner and head of U.S. cybersecurity at Capco, adding that such technology is not yet widely available. 

"It can be easy to use genAI to circumvent protections," Bonnay said. "It's easy to mimic people for voice attacks or phishing. We have seen crooks writing malware more easily and inserting it into code that third parties deliver to clients."
