How banks are using AI for good

Past event date: February 5, 2024, 11:00 a.m. ET / 8:00 a.m. PT. Available on demand. 45 minutes.

As generative AI, machine learning and artificial intelligence get deployed in more areas of financial services, there are myriad ways banks can use advanced AI ethically: to offer better services to customers; to enhance fraud detection, customer service and financial inclusion; and to improve efficiency. Minerva Tantoco, interim CEO of the New York Hall of Science, former chief technology officer and co-founder of Grasshopper Bank and former CTO of New York City, who has served in senior innovation roles at Merrill Lynch and UBS, will share some of what she has learned about using AI to benefit staff and customers.

Transcription:
Transcripts are generated using a combination of speech recognition software and human transcribers, and may contain errors. Please check the corresponding audio for the authoritative record.

Penny Crosman (00:09):
Welcome to today's Leaders session: How can banks use AI for good and use AI ethically? Minerva Tantoco is the interim CEO of the New York Hall of Science, and she has had many technology leadership roles inside and outside of financial services, including roles as Chief Technology Officer at Merrill Lynch and at UBS. She was one of the co-founders and Chief Technology Officer of Grasshopper Bank, a digital bank that got a bank charter. She was a senior product manager at Palm, as in Palm Pilot. She was the first chief technology officer for the city of New York. And what am I forgetting? She also holds four US patents on intelligent workflow, and she's currently writing a book on ethical AI, a practitioner's guide. Welcome, Minerva.

Minerva Tantoco (01:09):
Thank you. Thank you, Penny. Thanks so much for having me. Excited to be here and really appreciate the opportunity to talk about AI for Good in banking.

Penny Crosman (01:19):
Well, we appreciate your coming. So let's talk about you for a minute. How did you even get into technology originally, when you were a college student, I guess?

Minerva Tantoco (01:30):
Well, it's kind of a winding road. I went in as a pre-med. I wanted to go into medicine and potentially brain science, and that is how I started to get interested in how the brain works. One of my courses had me working on the mainframe doing some statistical analysis. And once I realized that you could actually use computing to study the theory of knowledge and how humans learn, I fell right into AI in the early eighties.

Penny Crosman (02:08):
And then you started your own company while you were still in college. How did you do that?

Minerva Tantoco (02:13):
Well, again, that was a lot of serendipity. I was visiting some friends in Silicon Valley and they were talking about their startup, which was meant to be an expert system for management consultants, sort of an intelligent assistant using a rules engine. I volunteered that I knew all about that because I was studying AI in college, and I ended up staying the summer and the fall to help define the product, starting as a knowledge engineer. I did go back for my senior year and finished my thesis, which was on the theory of subjectivity in machines.

Penny Crosman (02:58):
Fascinating. But then you were kind of running a company where a lot of people were twice your age, right?

Minerva Tantoco (03:05):
Yeah, it was interesting, of course. I mean, I was in my early twenties, but we had hired some of the best people in Silicon Valley at the time; many of them were in their forties. It was my first leadership experience, and it was how I learned to work more collaboratively. I saw my role as helping the others be successful, as opposed to a top-down leadership role. I think that really influenced the way I've led ever since, even in very large organizations: both an entrepreneurial, startup approach and a kind of coach-leader approach.

Penny Crosman (03:50):
That's a good approach. Now, you were the first chief technology officer for New York City. Did you actually create that role? I know you were the first one to have it.

Minerva Tantoco (04:00):
Yeah, I mean, one of the things I'm fond of saying is that I've invented every role I've ever had, because many times that role didn't exist before I had it. So I think it's fair to say that when you're in a role that didn't exist before, whether it's an innovative new role or just a role the organization felt it needed to create, it is an opportunity to invent that role. And that was true for the city of New York. The City of New York had had a chief digital officer and a chief information officer, but they were looking for a chief technology officer to develop a citywide tech strategy, a more strategic approach to how we can use technology to make the lives of everyday New Yorkers better. As part of the interview process, I presented what I thought the priorities and vision for that role should be, and I assume they agreed with it, because they brought me on to do it. That was my first government role. I had been at UBS for four years, but I wanted to bring the knowledge and techniques and approaches to innovating in a large organization to the city I love.

Penny Crosman (05:31):
And what were some of the projects you did there?

Minerva Tantoco (05:34):
The very first one was to bring over the finish line a project called LinkNYC, which replaced payphones with the free wifi kiosks that you see all over the city. That had started before me, but we were coming close to the deadline for writing a new contract for it. It was really, really fun and exciting to be able to bring free wifi to the streets of New York, and I learned a lot in that time. Another was Computer Science for All, which brought computer science education to all New York City public schools; that was a public-private partnership. We also did the Internet of Things guidelines. It was just the beginning of smart waste baskets and smart traffic lights, and we wanted to make sure that the city agencies purchasing the technology were able to appropriately assess the security policy, the privacy policy, and the maintenance and operations policy for it, since many agencies were buying this kind of technology for the first time. I think overall it was really driven by a desire to make New York City the best smart city in the world, and also to address the potential for a tech divide, meaning making technology available to all New Yorkers and not just a few. And in 2016, New York City was named best smart city at the Smart City Expo World Congress in Barcelona. So I'm pretty proud of what we did.

Penny Crosman (07:23):
Congratulations on that. So you were one of the founders and the first chief technology officer at Grasshopper Bank as well. What interested you in helping to start a new digital bank?

Minerva Tantoco (07:39):
Well, that's a great question. I was approached with the question: if you could build a bank from scratch, how would you do it? Having been in financial services, implementing technology roadmaps, improving on legacy systems and creating new lines of business with online banking, the opportunity to build something from scratch was just too juicy to pass up. I already had in my mind how I would've done things differently; each time I had to change something that existed, I always thought to myself, I would've done this differently had I known what the future would look like. So I couldn't resist the opportunity to build a bank from scratch and design it so that it could be the bank for the future of commercial banking. And that was really focused on being customer-centric, data-driven, and, I would say, for digital natives. Grasshopper Bank is now doing well, and I'm very proud of it. And speaking of digital natives, I think we're now entering the phase of AI natives. Children being born now won't know what it was like to not have AI in their everyday lives, in the same way that digital natives were born into the internet age. So it's something important to think about.

Penny Crosman (09:18):
That's a good point. So you created a predictive system that you called Precogs. Can you tell us about that?

Minerva Tantoco (09:25):
That name is a joke, an internal engineering joke. One of the things that I puzzled over was that a lot of the compliance and fraud checks are done after the fact, right? So you would have either rooms full of people poring over transactions, or your systems would be looking over previous transactions, sometimes 30, 60, 90 days in arrears, to look for suspicious behavior, unauthorized wires or, from a risk perspective, anything that would violate a covenant. So, given the opportunity to build something from scratch, could you create a system that monitored the transactions and could predict before something bad happened? In reference to the movie Minority Report, we jokingly called it the Precog system, because it basically could look at a crime before it happened, as it were.
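To make that concrete, here is a minimal sketch of the idea, not Grasshopper's actual system: hand-written rules plus a crude anomaly score that hold a risky transaction for human review before it settles, rather than 30, 60 or 90 days afterward. All names, thresholds and weights below are illustrative.

```python
# Hypothetical pre-settlement transaction screening sketch.
from dataclasses import dataclass

@dataclass
class Txn:
    account: str
    amount: float
    country: str
    is_wire: bool

# Per-account history for a crude anomaly score (maintained elsewhere
# in a real system; these values are made up).
HISTORY_MEAN = {"acct-1": 120.0}
HISTORY_STD = {"acct-1": 40.0}
HOME_COUNTRY = {"acct-1": "US"}

def risk_score(txn: Txn) -> float:
    """Combine simple rules with a z-score style anomaly signal."""
    score = 0.0
    mean = HISTORY_MEAN.get(txn.account, 100.0)
    std = HISTORY_STD.get(txn.account, 50.0)
    score += max(0.0, (txn.amount - mean) / std) * 0.2  # unusual size
    if txn.country != HOME_COUNTRY.get(txn.account):
        score += 0.3                                    # unusual geography
    if txn.is_wire and txn.amount > 9_000:
        score += 0.4                                    # near-threshold wire
    return min(score, 1.0)

def route(txn: Txn) -> str:
    """Hold risky transactions for review while they are still pending."""
    return "hold_for_review" if risk_score(txn) >= 0.5 else "settle"

print(route(Txn("acct-1", 9_500.0, "RO", is_wire=True)))  # hold_for_review
```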

Penny Crosman (10:31):
Well, that's fascinating, because then there's that danger of falsely accusing somebody. Isn't that what the movie is about, too?

Minerva Tantoco (10:39):
Yes, exactly.

Penny Crosman (10:40):
So how do you build something like that and make sure that you don't get an innocent person caught up in the loop?

Minerva Tantoco (10:48):
Yeah, well, I think the important design element in that was the human-in-the-loop design, which is to say the computer can detect a potentially fraudulent credit card transaction, but nowadays it will alert you and ask: did you happen to buy that item in another country? And you have the opportunity to say, no, that wasn't me, or, yes, I'm traveling and I just needed to buy some socks in some weird place that I've never been before. The same principle was in the design of this system: how can these rules systems help the compliance officers and the risk officers do their jobs more efficiently? They're the ones that are ultimately accountable for the decision. Is this person actually that person, or someone with the same name who may be a politically exposed person? The BSA rules, all of that still happens, but it automates a lot of the really time-consuming, labor-intensive work that is more easily done by machine. The decision should always be with the human responsible for it.
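A minimal sketch of that human-in-the-loop pattern might look like the following; every function name and field here is hypothetical. The model may flag, the customer may confirm, but a named human owns the final decision, and that decision is logged so it can feed back into the system.

```python
# Illustrative human-in-the-loop decision flow for flagged card transactions.
from datetime import datetime, timezone

audit_log = []

def model_flags(txn: dict) -> bool:
    # Stand-in for the real detector; here it flags foreign purchases.
    return txn["country"] != txn["home_country"]

def ask_customer(txn: dict) -> bool:
    # Stand-in for an SMS/app prompt: "Did you buy that item abroad?"
    return input(f"Was this you? {txn} [y/n] ").lower() == "y"

def decide(txn: dict, compliance_officer: str) -> str:
    if not model_flags(txn):
        return "approve"
    confirmed = ask_customer(txn)
    decision = "approve" if confirmed else "block"
    # The accountable human, not the model, owns the recorded decision.
    audit_log.append({
        "txn": txn,
        "decision": decision,
        "decided_by": compliance_officer,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return decision
```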

Penny Crosman (12:15):
Ideally. Yes. So that's a good segue. So the core of our topic today is using AI for good. And so to start with, what does using AI for good mean to you?

Minerva Tantoco (12:30):
Again, it's a great question, because AI for good, or ethical AI, is really used in a lot of different contexts. For me, it really is part of the dialogue around how we make AI safer and more accurate, and how we make sure that AI has guardrails around potential harms. It's sort of the opposite of AI for bad, I guess, which is not the same thing as some other interpretations might be; ethics implies that we have a common definition of what is ethical. So I really think about AI for good as using it safely, for the right purpose, and protecting people from its harms. Human in the loop is an example of that. Now that social media is 20 years old, we can see the impact of not thinking ahead of time about what the potential harms are and not putting those safeguards in. There was a big hearing last week looking back on the things we could have done differently with social media. I'd like us to take that knowledge and use it with AI now.

Penny Crosman (13:54):
What are the kinds of harms that you fear or worry about the most, especially related to financial services?

Minerva Tantoco (14:02):
Yeah, I mean, they're the classic ones: false positives and false negatives. If you're predicting something, what will be the response? If you have a predictive accuracy of 90% or 99%, that's pretty good, but what do you do about the other 10% or 1% where you're potentially making a mistake? Do you have a plan around that, and a way to feed back the input on it? It's not new. Algorithms have been in financial services for a very, very long time, in credit ratings and underwriting and many, many things. I'd say the first AI scientists were probably the insurance actuaries, who decided how much you should pay for life insurance based on their knowledge of statistics at the time. So part of the guardrails is around what you do with those decisions and how you guard against the known potential for incorrect predictions.

(15:13)

Another way to guard against it, of course, is the data itself. If you use incomplete or unbalanced data, how will you know whether the predictions your algorithms came out with are correct? If you only build credit scores based on one population, they may not apply to another population. The good news is there are a lot of really great data-balancing algorithms, and you can experiment with them and see which ones give you the results you're looking for. And I think it's important to really think about two factors. One: what are you optimizing for? You may be using a proxy that's inappropriate for that particular decision; you might be using zip code when in fact median income is a better thing to optimize for. Another: how do you deploy the tool so that the people using it are actually trained to use the algorithms correctly and know that they are the feedback system, a way to correct the system itself?
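As a rough illustration of those two factors, the sketch below assumes a pandas DataFrame of historical credit decisions with made-up column names. It rebalances an underrepresented population by naive oversampling, then drops the zip-code proxy in favor of the feature you actually mean to optimize on.

```python
# Illustrative data-balancing and proxy-removal sketch.
import pandas as pd

df = pd.DataFrame({
    "population": ["A"] * 90 + ["B"] * 10,
    "median_income": [55_000] * 90 + [48_000] * 10,
    "zip_code": ["10001"] * 90 + ["10456"] * 10,
    "defaulted": [0] * 85 + [1] * 5 + [0] * 7 + [1] * 3,
})

# 1. Naive oversampling so each population contributes equally.
largest = df["population"].value_counts().max()
balanced = pd.concat(
    group.sample(largest, replace=True, random_state=0)
    for _, group in df.groupby("population")
)
print(balanced["population"].value_counts())  # A: 90, B: 90

# 2. Drop the questionable proxy; keep the feature you mean to use.
features = balanced.drop(columns=["zip_code", "defaulted"])
```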

Penny Crosman (16:38):
I guess that's key: the testing, the fine-tuning. Okay, this isn't working that well, so do we reverse-engineer the model to say what we need to change, or what new data we need to bring in, that sort of thing. So overall, what do you see as the best and brightest use cases for AI in banking, whether it's generative AI or more traditional machine learning algorithms?

Minerva Tantoco (17:05):
Yeah, per my earlier Precog approach, I think there's a huge opportunity for the use of AI and machine learning specifically in helping compliance officers and regulators with the massive volumes of transactions they need to deal with every day. It's not the same as it was 30 years ago; there are online transactions now, and I think this regulatory tech or compliance tech is a great opportunity for a verticalized approach to AI. Another one would be in the arena of customer service or advisor services. Robo-advisors have been around a long time and they're constantly improving, and this can help the consumer a great deal as well, especially because one of the things generative AI can provide is a natural language interface to big stores of knowledge. You don't have to know the right search term; it can answer you like a human being, sometimes incorrectly, but it's a way to query a large knowledge base using language that you are comfortable with.

(18:34)

And so to me, LLMs are really a great advance in human-computer interaction. Another, of course, is fraud detection, which we talked about. That's just getting better and better, and the bad guys are using AI too, so we need the good guys using AI in terms of detecting and preventing fraud, or warning you about things like that. I'm sure there will be technology that should be able to detect deepfakes and false videos; as soon as we develop a capability, the bad guys will use it. Certainly internally there's operational efficiency. We've done a lot of business process improvement, but that can very much be given a predictive quality. In my experience, you can look at your loan book, for example, and incorporate news items and other trends to ask: do you have some concentration risk in this particular industry? You can have a constant analytics report, based on external information, that gauges whether you're in a high-risk or low-risk position. So there are so many opportunities to use AI as a tool to actually help make banking more stable and secure.
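A minimal version of that concentration-risk check could look like the sketch below. The loan data, column names and 40% policy threshold are all invented for illustration; a production system would also fold in the news and trend signals she describes.

```python
# Illustrative loan-book concentration check.
import pandas as pd

loans = pd.DataFrame({
    "industry": ["restaurants", "restaurants", "logistics", "retail"],
    "outstanding": [4_000_000, 3_500_000, 1_000_000, 1_500_000],
})

THRESHOLD = 0.40  # policy limit: no industry above 40% of the book

exposure = loans.groupby("industry")["outstanding"].sum()
share = exposure / exposure.sum()
flagged = share[share > THRESHOLD]
print(flagged)  # restaurants hold 75% of exposure -> flagged
```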

Penny Crosman (20:22):
Well, when you were talking about the idea of a customer interface, you said the answer is sometimes incorrect, which is kind of a big problem, especially with large language models. They do have that problem of hallucination, or pulling data from a source that really isn't applicable, or a piece of data that's either plagiarized or not suitable for the occasion. And we're seeing banks be pretty cautious about using a large language model and putting it in front of their customers. Do you think they'll get to the point where banks could give customers a ChatGPT-like chatbot to work with?

Minerva Tantoco (21:12):
I think perhaps in the long term, but in the short term, what I'll call special-purpose LLMs are really the next phase here. We're going to have data which has already been cleared and approved, not just scraped from across the whole internet. That is the first step: use a data source that either you own or that you paid for, and then build the model on that. I think that's going to be really key, and then these verticalized uses will be a way to make the interaction more human. Some of the hallucinations are just a reflection of the data that went in, so if you put better data in, restricted data that you own, which some companies are doing, then you can improve on the hallucination part. Other applications I've seen are almost like: let's feed in, say, consumer disclosure rules, and then could you say, hey, have a look at this marketing copy and tell me if it's compliant or not? It's almost like a Grammarly for compliance. So there are some very interesting developments in that area as well, what I'll call single-purpose or verticalized applications of the technology, not the broad technology. I think what we have today is a very interesting experiment. But you're right, it's not ready for commercial use unless you can confirm that you're allowed to use your sources of data.
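A sketch of that "Grammarly for compliance" idea appears below. It retrieves the most relevant of your own approved disclosure rules and assembles a prompt for a model. The rules are invented stand-ins, and call_llm is a placeholder for whatever provider you wire in; nothing here is a real bank's rule set.

```python
# Illustrative retrieval-grounded compliance check for marketing copy.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

RULES = [  # stand-ins for real, approved consumer disclosure rules
    "Savings rate advertisements must state the APY and its effective date.",
    "Offers conditioned on a minimum balance must disclose that minimum.",
    "The phrase 'free checking' may not be used if any maintenance fee applies.",
]

def retrieve(copy_text: str, k: int = 2) -> list[str]:
    """Return the k rules most similar to the marketing copy."""
    vec = TfidfVectorizer().fit(RULES + [copy_text])
    sims = cosine_similarity(vec.transform([copy_text]), vec.transform(RULES))[0]
    return [RULES[i] for i in sims.argsort()[::-1][:k]]

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model provider here")

copy_text = "Enjoy totally free checking when you keep $1,500 with us!"
prompt = (
    "Check this marketing copy against these disclosure rules and list "
    f"any violations.\nRules: {retrieve(copy_text)}\nCopy: {copy_text}"
)
print(prompt)  # then: call_llm(prompt)
```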

Penny Crosman (23:00):
You mentioned, when using AI in compliance or fraud, that you need to have that human in the loop to look over the result, and even with loan decisions, you need to have a human review and make the final decision. But what about when you have a large bank with millions of customers and they simply don't have the staff to review every decision: we're going to open this account, we're going to close that account due to these signals that we're seeing with AI. Is there a way to make AI itself the ethical overseer, the conscience of a system?

Minerva Tantoco (23:44):
Well, that's a really great question, and I think we kind of use algorithms today in that context. Part of it is because, for example, if you suspect money laundering, you're not supposed to tell the owner of that account, hey, we think you're a money launderer, right?

Penny Crosman (24:08):
Because they might actually be a money launderer.

Minerva Tantoco (24:11):
So it's a tricky space, right? Because then you want the system to automatically at least freeze that account while an investigation takes place. And you're right: what do you do with microtransactions, or thousands of transactions just under $10,000, and all that stuff? It's a tricky problem. But in the end, if there is an automated, say, closure of an account, there absolutely should be a way to assess whether that was the correct decision or not and provide that feedback to the system, because over time you might find that you're disproportionately affecting a certain group of people, or all foreign accounts, or something like that. So you really do have to have a far more sophisticated level. Maybe there's another level up from that: the first system makes the initial decision, and then you might need another system that actually checks that decision before it goes to the human. But you're right, it's a lot going on.
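One simple form of that after-the-fact feedback check is sketched below: compare automated account-closure rates across groups and flag disparities using the four-fifths rule of thumb. The data, column names and 0.8 cutoff are illustrative, and a real review would use proper statistical tests rather than a raw ratio.

```python
# Illustrative disparity check on automated account closures.
import pandas as pd

decisions = pd.DataFrame({
    "group": ["domestic"] * 880 + ["foreign"] * 120,
    "closed": [1] * 18 + [0] * 862 + [1] * 9 + [0] * 111,
})

rates = decisions.groupby("group")["closed"].mean()
ratio = rates.min() / rates.max()
print(rates)
if ratio < 0.8:  # four-fifths rule of thumb
    print(f"disparity ratio {ratio:.2f}: route these closures to human review")
```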

Penny Crosman (25:29):
Interesting. So I know when we talked before, you said that being a mom made you a better techie. Can you explain how that happened?

Minerva Tantoco (25:40):
Yeah. My very first CTO job, I got about 10 years into my career. It was late '97, and my daughter was born in 1998. There were no other pregnant female CTOs at the time that I knew of, but what I wanted to do was find a way to do my job effectively. Back then you had to be in the office, on your office computer, to get email. And one of the opportunities I had was to think about a way to put your email on the phone. I also helped develop some of the original applications for the Palm, which not only allowed you to use your email, but also to control servers and things like that. And that meant that I could be available to my team and manage my team, all while pushing a stroller on a Saturday afternoon that I wanted to spend with my kid. It became a necessity to figure it out, and necessity is the mother of invention. So that really drove me to figure out ways to work remotely. Today it's just a given that you can be available and work from anywhere, and I think this has been a great equalizer, especially in the tech world.

Penny Crosman (27:15):
You think it's actually helped women, in a way, because that way they can balance their responsibilities at home?

Minerva Tantoco (27:22):
Yes, it gave much more flexibility, and it's not just women; lots of people need flexibility to be able to care for their kids or take care of other folks, as we've seen post-pandemic. I mean, that was 25 years ago. It's now part of everyone's productivity to have the ability to check on things even when they're at home.

Penny Crosman (27:50):
When you think about being a technology leader, what are some of the qualities you think are necessary for someone to say be a chief AI officer at a large company? What kinds of character traits do they need to have?

Minerva Tantoco (28:08):
Yeah, I mean, number one is of course a kind of combination of curiosity and fearlessness. It's not always encouraged, actually, especially in highly regulated firms. And so I saw it as a challenge: how do you innovate responsibly in a highly regulated environment? I saw the additional constraints as a challenge, which is, how can we try new things and yet manage the risk of those new things? To me, it's really about being a practical innovator. What's doable? What will have some real positive impact on productivity or accuracy or speed? And then go from there. One of the things I've always been a fan of is prototyping, trying things out on a small scale before going to a large scale. And that was something I brought with me, say, to the city of New York, which tends to do things on a large scale from the get-go.

(29:21)

That controlled experimentation is sort of the scientist in me: let's prove this out first, and then let's try it at a bigger scale. I also was really fortunate to have two wonderful mentors throughout my career who gave me the air cover to try new things, come up with ideas, and really try them out. And so I try to do the same now: when I see someone who's really trying to think of new ways of doing things, I give them the space and the opportunity to prove it out one way or another, and help them manage that risk. Only innovation will get us to the next phase.

Penny Crosman (30:15):
Do you think companies like banks need to have someone with that chief AI officer title or something comparable and then a whole kind of staff focused on ai? Or do you think it can be blended into other areas of technology?

Minerva Tantoco (30:33):
Another good question. I think it's going to evolve, similar to the way "e" was initially in one small R&D department, but now you don't even have to say "e" anymore, right? It's all electronic. So I think it will start with that same concept: a small group that has legal counsel, IT, compliance and regulatory expertise, as well as the technology expertise. That may be multiple people, or one or two people, looking over and helping to develop, alongside the business strategy folks and the corporate strategy folks, how best to use AI. Over time, that will become embedded. So, embedded compliance, or embedded communications: hey, isn't it time you contacted that client again? Here's a sample email, go ahead and send it. I think that will eventually come, but initially it probably makes a lot of sense to make sure that you don't make a misstep in some applications, and that you're really just trying things out and experimenting first.

Penny Crosman (31:52):
So banks, as you know, tend to be rather risk-averse, and they also tend to be matrixed. So I feel like in a position of leading AI, there must be a lot of headwinds, where whoever's in that role comes up against bureaucratic situations, risk aversion, worries about what the regulators will say, what if something goes wrong, what are all the dangers, and maybe doesn't move quickly enough due to all of those factors. Do you think it's important at this time for banks to really push through all of that and make sure that they stay inventive and innovative when it comes to AI, and not sort of sit back and wait for other people to figure it out?

Minerva Tantoco (32:49):
Well, I'm not promoting innovation for innovation's sake. I think it is absolutely critical that every bank and financial services organization be clear-eyed about what problem you are trying to solve. If that problem is, let's say, that the compliance team is overwhelmed or the regulatory reports are too big, then that business case should be compelling enough to try a new approach. So always start with the problem, to which AI may be the solution.

Penny Crosman (33:30):
Is there any area in financial services where you would say: don't use AI, it's too risky, it's too unproven?

Minerva Tantoco (33:41):
I mean, it's not so much a specific area; it's just that if there's a potential that you may come out with outcomes that are not appropriate, then I would pull back. It's not so specific. Outside of financial services, I've often seen people go in and maybe rush to use a new technology where it has real medical impact, or where a sentence came out longer than it was supposed to. So I do think that in all those cases you have to be very aware of what could go wrong and then be prepared for that. It doesn't mean don't do it, but don't go in unprepared.

Penny Crosman (34:37):
What about facial recognition? I mean, obviously some cities were working with Clearview AI to let police go through photos that had been taken just randomly off the internet and look for suspects with the use of its matching engine, and that was rather controversial. What do you think about using facial biometrics to identify people?

Minerva Tantoco (35:09):
Well, again, it's really about the data and the training. It is a well-known and published fact that many facial recognition technologies are more likely to make a mistake with people of color and with women. Part of it's due to lighting, part of it's due to the algorithms themselves, and a big part of it is that the data sample's not big enough. Interestingly, humans make the same mistake: they're more likely to make an error in identifying someone who's not of their own race, for example. So humans are reflected in this AI as well. There are people who look very similar, women who look very similar, where one might be the president of a company and one might be a criminal, and yet the system might mistake one for the other. That's why it's so controversial: the stakes of the error are high. NIST has put out a risk-based AI framework, which helps provide at least some guidance: the higher your potential risk, the more you have to do. If I make a mistake on a shopping site and recommend the wrong jacket, that's a much different level of stakes than mistaking you for an international criminal.

Penny Crosman (36:42):
Well, yeah, and that's a great point, because the same could be true for giving someone a mortgage or not, which could have a big impact on their life, or even giving someone a bank account or not. The data is just so important. What could a bank do to ensure they have the most complete, most accurate, most up-to-date data possible? Do you see it more as a matter of working with the right third-party partners, having the right data lakes? Is there anything you've learned from your work that you could say about getting the data right?

Minerva Tantoco (37:24):
Yeah, I think there are a couple of great examples. There are third parties that have focused specifically on the accuracy question. You have companies like FairPlay AI that are looking at the outcomes of mortgage decisions, and you have Socure using and fine-tuning algorithms to identify identity fraud. There are lessons learned from them: over time they've really looked at and fine-tuned their data and actually figured out how to balance it. In other cases, you can use the bank's own data and use algorithms to verify it. I remember when we were testing a lot of things, we had to use algorithms to create synthetic data, because we didn't want to use people's real data. So there are many, many ways to use algorithms to help do the job every day.
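A minimal sketch of that synthetic-data approach, using only numpy with made-up distribution parameters, might look like this; in practice you would match the parameters to your book's real statistics so the test data behaves plausibly without exposing any customer.

```python
# Illustrative synthetic transaction generator for testing.
import numpy as np

rng = np.random.default_rng(seed=7)
N = 1_000

synthetic = {
    # Amounts: log-normal roughly mimics the long right tail of spending.
    "amount": np.round(rng.lognormal(mean=4.0, sigma=1.0, size=N), 2),
    # Merchant categories drawn with plausible frequencies.
    "category": rng.choice(
        ["grocery", "fuel", "travel", "electronics"],
        size=N, p=[0.5, 0.3, 0.1, 0.1],
    ),
    # Seconds offset within a 30-day test window.
    "ts_offset": rng.integers(0, 30 * 24 * 3600, size=N),
}
print(synthetic["amount"][:5])
```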

Penny Crosman (38:43):
And I guess the last thing: if you look at the employee base of the average bank, it's not full of AI experts. What do you think banks could or should be doing to keep all their employees up to speed on new advances in AI, new uses for it, safeguards? How do you think they can bring the whole employee base up to speed to become AI experts and users?

Minerva Tantoco (39:21):
What's really important there is training. Again, looking at the last few waves of technology, what we've seen since the release of ChatGPT and GPT-3 and 4 is the consumerization of AI. That means you no longer need to be a programmer to use AI; you can just write a question. In a similar way, there were big waves of social media, and before that the internet, which was only in universities and the military before it became consumerized. How banks responded then was to make sure that their employees were trained in how to use these technologies appropriately and what to do if something went wrong, and in fact they had policies around it. So what we're going to see are not only federal, international, state and city regulations, but also the banks themselves starting to think about what their policies are for the use of AI and how to train their folks to use AI more responsibly.

Penny Crosman (40:30):
That makes sense. Well, Minerva Tantoco, thank you so much for joining us today. And to all of you listening, thanks so much for tuning in. I hope you found this useful and I hope you have a great day.

Speakers
  • Penny Crosman
    Executive Editor
    American Banker
    (Host)
  • Minerva Tantoco
    Interim CEO
    New York Hall of Science
    (Guest)