Busting AI myths with the ETA's Jodie Kelley

There is a lot of confusion surrounding the banking and payments industries' use of generative artificial intelligence, and if these concerns are not cleared up, they could lead to burdensome regulations designed to address a nonexistent problem.
 
At its heart, the issue is whether AI is making decisions that humans should be making, and in doing so introducing biases that humans would be likelier to catch. Some uses of AI have been around for years without controversy, yet risk being swept up in the same net meant to target newer implementations.

Jodie Kelley, chief executive of the Electronic Transactions Association, sits down with Daniel Wolfe, content director at American Banker, to separate truth from fiction.

Transcription:
Transcripts are generated using a combination of speech recognition software and human transcribers, and may contain errors. Please check the corresponding audio for the authoritative record.


Daniel Wolfe (00:09):
Welcome to Leaders. I'm Daniel Wolfe with American Banker, and I am here with Jodie Kelley. Jodie, would you mind introducing yourself?

Jodie Kelley (00:16):
Not at all. I'm Jodie Kelley. I am the CEO of ETA. We are the trade association that represents the breadth of the payments industry.

Daniel Wolfe (00:23):
What does ETA stand for?

Jodie Kelley (00:24):
Electronic Transactions Association.

Daniel Wolfe (00:27):
So not estimated time of arrival?

Jodie Kelley (00:28):
It does not, no.

Daniel Wolfe (00:30):
Alright. So we're here to talk today about artificial intelligence and to do a little bit of myth busting. There's a lot of conversation about what it is, how people are using it, how people are misusing it. And I was hoping you could give us a sense: when you're having conversations within the industry, what are some of the misconceptions that you're seeing?

Jodie Kelley (00:48):
Yeah, I think a lot of the misconceptions we're seeing are particularly with the public and with policymakers. There's so much attention right now being paid to generative AI. We all saw the reporting when the New York Times reporter used ChatGPT and was told to leave his wife and follow ChatGPT because they were in love. And that captures the imagination. It's really interesting and fun to follow. But really, when you look at the payments industry, the payments industry has been using predictive AI for decades, doing things like fraud prevention, customer service, anything that requires taking a very large body of data and distilling it into patterns or trends from which you can make predictive judgments. As the name suggests, generative AI is very different. Generative AI is used to create something new, and it is largely being used for things like coding, things that are internal that drive efficiencies, experimenting with customer service, but not for core functions like decisioning, like making underwriting decisions on lending, for example. And that's where I think the confusion largely lies, and that's one of the myths that I think needs to be busted. Generative AI is not currently being used in ways that negatively impact consumers, and the industry is very careful as it works its way through experimenting with gen AI and seeing just how it can be used to make the customer experience better and fraud prevention more accurate.

Daniel Wolfe (02:27):
So what are some of the guardrails you're seeing? The companies that are experimenting with gen AI, what do they need to put in place to make sure that they're not introducing bias or allowing AI to make decisions that humans should be making?

Jodie Kelley (02:38):
Yeah, so first I would say that explainability and transparency are key pillars in the development of generative AI. And as our member companies in the industry experiment with generative AI, they are looking to see: is it making decisions that they can explain, is the way it gets there transparent, and are the outcomes the same outcomes that you would expect to see? And that's going to take some time and some work. I just saw this week there was an article talking about the Gartner tech hype cycle. And they said right now in generative AI, we are in the trough of disillusionment. And all that meant was there was so much hype: it's going to solve all of our problems, the world is going to change overnight. And then companies, including our own, experiment with it and they find out it's a little harder than that.

(03:33)

There are things you've got to figure out, and you're certainly not going to deploy it in certain ways without being absolutely confident. Ours is a highly regulated industry, and so you've got to get it right. And so you experience that high of excitement about what you can do with it. You experience a low as you realize this is harder than you thought, and then, if you follow the cycle, this will ramp back up and plateau. We will inevitably be able to deploy gen AI in a way that is positive, but over time and in a way that we're confident is delivering good results.

Daniel Wolfe (04:06):
One example that I like to refer to about the disconnect between what our expectation is with generative AI and what the reality is: I saw this on Twitter recently, some people trying to use ChatGPT to generate code. What they were doing was they wanted to create a game, guess a number between one and a hundred, and ChatGPT wrote the code and it worked. It could run the code itself in the ChatGPT window. But if I were to ask you, a human being, to guess a number between one and a hundred, how many guesses do you think it would take? Maximum?

Jodie Kelley (04:40):
I don't know.

Daniel Wolfe (04:41):
Certainly no more than a hundred. Well. ...

Jodie Kelley (04:43):
Certainly no more than a hundred, even I could figure that out.

Daniel Wolfe (04:44):
But ChatGPT, it took about 111, I think.

Jodie Kelley (04:48)
Oh my.

Daniel Wolfe (04:49)
It kept guessing some of the same numbers repeatedly, and it finally knew when it got it right. But I'm thinking, it's so good at one thing, and that might give you a little too much confidence. I don't mean to put ChatGPT specifically on the spot; it could be any form of technology, just generative AI in general. It's very good at one thing, and that maybe makes people connect dots in a certain way. And so I wanted to ask you about these leaps of faith people make, this disconnect. Why do you think there is a disconnect between the perception of how payment companies are using AI and the reality of what they're doing and the restraint that they have?
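[For readers curious why 111 guesses is so striking: a guesser that never repeats a number and halves the remaining range on each "higher"/"lower" hint needs at most seven guesses to find any number from 1 to 100. The sketch below is illustrative only; it is not the code from the tweet, and the function name is made up for this example.]

```python
def guess_count(target, low=1, high=100):
    """Count how many binary-search guesses it takes to find target in [low, high]."""
    guesses = 0
    while low <= high:
        mid = (low + high) // 2  # guess the midpoint of the remaining range
        guesses += 1
        if mid == target:
            return guesses
        elif mid < target:
            low = mid + 1   # "higher" feedback: discard the lower half
        else:
            high = mid - 1  # "lower" feedback: discard the upper half
    raise ValueError("target outside the range")

# Worst case over all targets 1..100 is 7 guesses, since 2**7 = 128 >= 100.
worst_case = max(guess_count(t) for t in range(1, 101))
print(worst_case)  # 7
```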

Jodie Kelley (05:27):
Look, I think there's this disconnect because the stories that are being told, the ones that consumers and policymakers and others see and read, are always the sensational stories. I mean, that's just what captures people's interest and that's what you talk about. It's pretty boring to talk about how we've been using predictive AI for decades, really productively, to identify fraud patterns. It's not nearly as exciting as some of the things we hear about gen AI. And so I just think, as with so many other things in our industry, we just have to be telling the story, right? Telling it in a clear way, talking about the difference between predictive and generative AI, talking about the possibilities, but just kind of grounding the discussion in reality. It is much more interesting to talk about movies in which the robot takes over the world powered by gen AI than it is to talk about something as mundane as better customer service. But I just think we have to get out there and tell the stories and make clear what we're doing, what the possibilities are, and what we may do but aren't doing yet.

Daniel Wolfe (06:35):
So speaking of movies, yes. You ever watch the Jetsons? Yes. There was an episode I saw recently where the humans went on strike. And you remember George Jetson had this AI companion, Rudy, his computer that he would talk to; it had a face and everything. And the humans were going on strike because they were overworked and they wanted Mr. Spacely, their boss, to hire more people. Now, the AI coworkers heard that and thought, wait a minute, if they hire more humans, they have less use for us. And it was just this weird kind of bizarro version of the conversation we're having today, where people feel like they might be replaced by AI, but in that future, it's the AIs who are worried that people can do their job better than they do. And it ended in this very surreal sort of situation where they're sitting at a conference table and actually having to negotiate the end of the labor strike.

(07:27)

And the humans on the one side have to admit, well, I like working with the AIs because they can do calculations faster than I can. And the AIs are like, well, I like the humans. They fix my bugs and they oil my gears. And I don't think that's literally the situation we're in right now, but I saw it as a very kind of, and this was back in the '80s, I guess, when the Jetsons was made, just this very sort of prophetic sort of thing where a lot of folks will have to come to terms with this is what AI will do, this is what AI won't do. And I guess that's a rambling way of coming to the question. Are there things that AI won't ever do? Do you see things in our industry that are just completely off limits? They have to have that human touch.

Jodie Kelley (08:10):
So first of all, I do think that seems prophetic. But I will say, look, a couple of things that you touched on there. One of them is labor and the impact of AI on labor. And I think it's absolutely the case that as we evolve with any technology, any technology at all, the types of labor that are needed change. That's going to be true with AI over time. So you won't be jobless, we're not going to be eliminating jobs, but jobs will change.

Daniel Wolfe (08:39):
Me personally, you promise?

Jodie Kelley (08:40):
Well, I can't promise that, but as a general matter, the jobs will change. But again, that's true with any technological development, with the printing press, anything; jobs change, the labor that's needed from human beings changes over time. And that will be true here too. In terms of what is off limits for AI, I would say, other than the obvious illegal types of things that we're all, and rightfully, concerned about, I think it's too early to say there's anything that would be off limits. I think it is true that there will certainly be, as in your Jetsons episode, a mix of technology and people. You need to always ensure that however you're getting to decisions, you're getting there in a way, again, that's transparent and explainable, and that the outcomes are what you would expect, and that's going to require human intervention. So I think it's a big open question: what is this technology actually going to deliver? How are we actually going to use it? And I will say, as we work with policymakers right now, that's one of the things that we're trying to impress upon them. We just don't know. So guardrails for sure; let's make sure that we are all comfortable with where we're going, that we can explain and justify it, but let's not make any decisions about what may be that we really aren't yet in a position to make.

Daniel Wolfe (10:07):
So of the pioneers out there, what are some of the more interesting implementations you're seeing of generative AI in the payments industry?

Jodie Kelley (10:16):
So there's a few things that I kind of love about what we're seeing right now in gen AI, and they seem small, but I think they're important. One example that I've been using quite a bit relates to small businesses. The payments industry is very focused on small business because small business is such a huge percent of the retail community and just drives our economy. And small businesses have it rough. They're trying to run a business and find customers and manage supplies and manage labor, and anything that makes their job easier makes them much more likely to succeed. And so gen AI is being built into software verticals that help small businesses run their business, including in simple ways like generating a marketing email in response to a request, or generating new images that serve their business in response to a request. So you have small businesses who can take off their plate some of the things they're not good at. These are not people who are, by and large, marketing experts, but they know they need to market and they know they need help doing it. And this is a simple and easy way to accomplish that task. And it's already being deployed in the market in a way that businesses are responding to very positively.

Daniel Wolfe (11:37):
So we were talking about guardrails before. Regulators, of course, they set guardrails. What are you seeing in terms of regulation of AI and how that's going to affect the development of the technology in the payments industry?

Jodie Kelley (11:49):
So there's quite a bit happening. In Europe, they are ahead, I would say, in terms of adopting regulation and a regulatory framework. And their framework is based on risk, which I think is sensible. And so you regulate differently depending on the risk presented by the use case the AI is being put to. That was just recently adopted. We'll see more rollout as we go, but that is active. They are out in front. As I said, here in the U.S., there's a lot of regulatory interest in AI. And again, I think that is prudent and reasonable. This is a technology that has captured the imagination. People are rushing to figure out how to use it. Again, in our industry, I think they're being thoughtful about it. But there is no question that in the same way any tool can be put to good use.

(12:46)

It can also be put to bad use. And in fact, we have seen, and we'll continue to see, fraudsters, for example, take advantage of the tool; that's inevitable. And so I think putting guardrails, to use your phrase, putting a regulatory framework around AI, is important. Not one that is overly prescriptive, because you don't want to stifle innovation, but a framework that's thoughtful. And we're seeing a lot of interest in Congress. There is a bipartisan task force on the House Financial Services Committee that's looking at it. The Senate also has been pulling in leaders in the technology to brief and have discussions with them. We're seeing the FTC involved. And interestingly, and not surprisingly, we're seeing a lot of activity in the states. If there is no kind of federal structure, the states tend to try to fill the gap. And so we're seeing lots and lots of states setting up their own task forces or legislating around the use of AI within government.

(13:48)

And so there's a lot of activity, but nothing has quite coalesced yet. But it will, right? This is an issue that is so top of mind and generates so much interest that I am confident we will see movement in the U.S. soon. And again, I think it's needed. When you think about how regulated our industry is, there's a legitimate hesitancy to lean too far into anything unless these companies know that what they're doing is within the appropriate regulatory framework. And so we'll see them pushing for some legislation or regulation, and I think we'll see a response.

Daniel Wolfe (14:31):
So particularly in the payments industry, although I think every company has this issue, the attention around generative AI stems largely from the fact that now any consumer can have access to it, either for free through their search engine or by paying 20 bucks a month to their favorite AI company. And with that, you have a different type of risk. Whatever people are putting into it as individuals is now something that is being used to train the AI, or the AI could somehow be tricked into revealing it. So is there a consideration within the payments industry around training and awareness of what people can and cannot be doing? I know working with banks, of course, people are used to restrictions and constraints, not being able to use copy and paste on their phones and stuff like that. But does this present a new set of concerns?

Jodie Kelley (15:30):
Yeah. Look, I think this technology presents all kinds of interesting twists, I guess I would say, on existing issues. So for example, data privacy, which underlies your question. Data privacy is clearly a big issue with respect to generative AI, and it is made complicated in the United States by the fact that we don't have a federal data privacy regime; it's largely states. A federal privacy regime would be a positive thing, but I don't see that happening, certainly not in the short to medium term. And so figuring out, within a regulatory construct, what those privacy rules of the road have to be is going to be an important part of this discussion. And there are other important parts of the discussion too. Again, existing issues like intellectual property concerns get imported into this technology in novel ways, and we're already seeing some of the clashes around who owns the data that is being used to train the models.

(16:38)

And that's going to be something that has to be resolved too. I will say, with respect to training: absolutely. And I think in addition to training, it really is, and we're seeing this, companies carefully structuring the way they either build the models or choose the environment in which they operate them. Are you pulling in data from public sources? Are you only pulling in data from sources you control? Who has access to the model? And then of course, how are you trained on using the model? So this isn't quite like hopping on your phone and asking one of the AI models to draft an outline of a paper for you if you're in college. I mean, the way these experiments and deployments are being structured, there are a lot of safeguards in place, for all the reasons we've been talking about.

Daniel Wolfe (17:35):
OK. Is there anything you're seeing in your role as the head of the ETA that you think gives you a unique perspective on this or anything that people have brought up with you?

Jodie Kelley (17:43):
Well, I think what's exciting about where I sit and where we sit is that we see the breadth of what's happening across the payments industry. And so we see the ways in which card networks and banks and processors and fintechs are all experimenting with AI, and some of the interesting use cases to which they're putting it. And I think the thing that's particularly interesting for me is just the number of use cases that people have identified that this could potentially help with, and the number of work streams that our member companies have going with respect to generative AI. I've also been impressed with, again, just how cautiously they're proceeding. I think everyone recognizes the potential upside, and everyone recognizes the real downside to getting it wrong. And so I think there is an appropriate caution in terms of approach, but incredible optimism and excitement. I think it's going to be incredibly fun, three or four years from now, for you and me to sit down again and see where we are then.

Daniel Wolfe (18:50):
Oh yeah. Or even one year from now.

Jodie Kelley (18:52):
Well, absolutely. Yeah.

Daniel Wolfe (18:53):
Absolutely. Any other final thoughts?

Jodie Kelley (18:56):
I think that captures it. I think it is exciting to see from a payments perspective, the ways in which our industry has evolved over not that many years from a very simple industry to one that is complex and dynamic and serving consumers and businesses. And AI is kind of the tip of that spear right now, I think.

Daniel Wolfe (19:19):
Well, thank you so much for your time today.

Jodie Kelley (19:21):
Thank you. It was a real pleasure.