Podcast

Can banks beat the 80/20 rule for generative AI?

Sponsored by
Sid Khosla, EY

Transcription:

Transcripts are generated using a combination of speech recognition software and human transcribers, and may contain errors. Please check the corresponding audio for the authoritative record.

Penny Crosman (00:04):

Welcome to the American Banker Podcast. I'm Penny Crosman. Banks have been ramping up their investment in generative AI, with large banks like JPMorganChase and Morgan Stanley making it available to all employees and hoping for large productivity gains. But Sid Khosla, EY Americas banking and capital markets leader, takes a sober view and predicts that, within two years, 80% of the value of the technology will come from only 20% of the use cases. Welcome, Sid.

Sid Khosla (00:33):

Thanks for having me, Penny.

Penny Crosman (00:34):

Thanks for coming. So, tell me about why you think the 80/20 rule applies or will apply in this instance. What were you seeing in the field or elsewhere that brought you to that conclusion?

Sid Khosla (00:52):

Any technology, and I say this often, all technologies go through what I call their Darwinian journey, right? You start off looking at a lot of broad applications of that technology. AI or generative AI or agentic isn't any different in that regard, but as you progress and you learn through both the use cases and the data insights they're generating, it becomes very clear that there's a specific use for technologies and capabilities like AI and generative AI, especially as it relates to language understanding, content creation, complex data synthesis and engaging with clients. But then there's always a complementary connectivity to other technologies, whether it's machine learning or rules-based systems that, at least in the short term, are better suited due to their precision, their interpretability, or their real-time decision-making capabilities. So I think we're all on this journey. Who knows if it's going to be 20%, 30% or 40%, but the fact is we're going to start with a broad funnel and everybody's going to try that out.

(02:16):

And as we progress over the next couple of years, we're going to find where most of the value, especially as it relates to generative AI, lies. What I also think is going to happen is, as this unfolds, institutions will realize that they need to be clear about where competitive advantage lies for each institution. That's also going to drive where they want to think about AI as far as the use cases are concerned, and where they would lean into other parts of the ecosystem within the financial services space to tap into those broader firms. So the 20% or 30% of the Pareto is really about what's going to be core competitive advantage for you versus what are the other areas where you'd lean on the ecosystem or third parties.

Penny Crosman (03:15):

With what you're seeing so far, what do you see as some of the higher-return use cases or maybe future use cases?

Sid Khosla (03:25):

I think the answer is different based on current and future. The start of all the initial use cases and experimentation has been more around operational efficiency and areas of customer service, which are giving lift. But longer term, I see those becoming much more commoditized versions of the leverage of AI or agentic. I think the longer-term value, especially for financial services institutions, is going to lie much more around customer engagement and growth as opposed to purely efficiency. I mean, efficiencies will always be part of the play, but I think the outsized growth is going to be around things like personalized offers. Think about wealth management firms and the ability to give adaptive financial advice; think about corporate banking, where you have much more dynamic pricing and custom lending offers; and even from a B2B standpoint, dynamic investment portfolios and things like that. I think those are going to be the real value drivers longer term, especially as it relates to growth.

Penny Crosman (04:43):

So things that could bring in revenue potentially or strengthen a customer relationship. It's interesting you say that. I feel like I see a lot of emphasis on efficiency. If we let people use a large language model to draft emails, they'll save X hours per week. Or if software developers use GitHub, they're saving X number of hours a week or a month. Do you think that's the wrong way to look at it?

Sid Khosla (05:12):

I don't think that's the wrong way to look at it. Those are the use cases that may be important in the short term, but after a point, those use cases, like I said, would become a lot more commoditized. And the question is going to be: is that where the financial institution sees their own kind of competitive advantage? Which is why I keep going back to the point around this Darwinian journey. I think we're going to attack the places where the current, short-term value creation is, which I think makes a lot of sense. But longer term, where you'd want to lean in with technologies like AI, where the real impact of generative AI or even AGI is going to be, is where your institution's competitive advantage is. And that has to be around areas of growth, areas of engagement with clients, areas of connectivity of products in cross-selling, and a much more personalized way of engaging your clients.

Penny Crosman (06:11):

Certainly personalization has been a holy grail for decades for banks, and it seems like that always comes down to having the right access to the right information that's up to date and where you're really able to aggregate transaction and communications and so forth across many different applications and areas, lines of business, et cetera. Do you think, with generative AI, banks are finally going to be able to get to a point where they truly have useful personalization?

Sid Khosla (06:53):

I'm very excited about it. And between clarity around where the banks want to play, the kind of engagement they want to have with customers, and the ability and the capability that AI and agentic would offer, yeah, I think it's going to give us all the best shot at making this hyper-personal. And we've seen that in industries outside financial services, so I think there's a lot to build upon there.

Penny Crosman (07:24):

Do you think there are areas where generative AI is not a good idea and that maybe a rules-based system or a more traditional machine learning system would be more appropriate?

Sid Khosla (07:37):

I don't think it's an either/or here. I think this is a question of how to use complementary capabilities and technologies for the best creation of value for the institution. In the short term, I do think machine learning or neural networks or rules-based systems are better suited for tasks and use cases that need precision, better interpretability or real-time decision making. And generative AI is particularly well suited for use cases that involve, like I said earlier, language understanding, content creation, scraping information, complex data analysis and things like that. If you take the example of a classic institution, there's going to be context awareness brought in through AI, but think about regulatory compliance or reporting, or even transaction monitoring, eligibility checking and onboarding of clients: there's a significant amount of legal and regulatory requirements that need to be interpreted in a very rules-based way. And it also needs a significant amount of rules-based logic to make predictable decisions around anti-money laundering, for example. So I do see a more rules-based system being a significant advantage there, in addition to the context awareness that AI would bring in.

Penny Crosman (09:22):

A lot of banks, I think, have hopes for using generative AI in their customer chatbots to generate really useful answers to questions and then eventually add agentic AI so that when somebody asks for some kind of transaction to happen, the agentic AI can just take care of it and a human wouldn't have to be involved. But I think there's still some hesitation to put a generative AI-based chatbot in front of consumers because of the chance of an error or hallucination or misinformation or outdated information that could be put out there to customers. Do you think banks are right to be cautious in this particular use case?

Sid Khosla (10:18):

It's a great question, Penny. I think the banks and all institutions are right to be cautious around that, because, again, these are early days in the maturity of the technology and there are a few things going on there. One you've mentioned, which is all around risk and liability: inaccurate information and advice, especially in this business or in this sector, could lead to serious financial consequences for both clients and the financial institution, from a legal-liability standpoint. I do think there's still work to do on data security and privacy, especially as we largely deal with large language models industrialized across sectors and not necessarily small language models that are very specific to the institution itself. So I think those concerns remain. A big inhibitor in acceptance is also around customer trust. You and I have been in situations where we prefer having a human interaction versus a chatbot, let alone an AI-driven chatbot.

(11:36):

So I think that's a journey we're all going to be on. There is this perception of a lack of empathy and emotional intelligence that customers may feel related to chatbots, and that's creating some inhibitors. I don't think longer term that's going to be an issue. And then the last thing I'd say is, from a regulatory uncertainty and compliance standpoint, there aren't very clear regulations around how to leverage AI in banking, especially as it relates to the regulatory framework as well as how interactions with clients need to happen. So I think explainability and transparency are going to be important, especially as the regulation around this matures. So I'd say those are the two or three things that are clear inhibitors here. Though, again, I think it's going to be a journey, and we are going to see a significant increase in both the deployment and acceptance of AI in customer interactions.

Penny Crosman (12:41):

So you have mentioned in the past that the banks' ability to use their proprietary data will give them a competitive advantage, and I think particularly using proprietary data in their generative AI models. What are some examples of this? Does this go back to the personalization you were talking about a little earlier?

Sid Khosla (13:03):

Yeah, sure. I think personalization is a good example of this, but the heart of this is that the use cases are important, and what the use cases really derive is insights based on the underlying data. So what this presents for institutions, which could be a really strategic advantage, is the ability to use something like AI or generative AI to bring together proprietary information that the banks may have and publicly available information to create much better decision making. And this could manifest in different areas, Penny. Risk management is an important area, especially if you combine publicly available information, proprietary information you have about customers, plus, like we were talking about, a good rules-based engine: fraud detection can get really, really precise as you analyze transaction patterns and customer behavior and prevent problematic activities there. I think credit risk assessments are a really good example where both proprietary information and the aggregate data that institutions may have about behavior could marry really well with the ability of AI, combined with rules-based engines, to get better insight.

(14:32):

You talked about personalized offers and product recommendations. We're seeing some really exciting stuff around custom lending offers or even optimized loan structures as institutions engage with clients, leveraging a much more analytical approach and very proprietary data around these kinds of products. And then you can't really talk about proprietary data if you don't talk about the operational efficiency it could create as well. One of the two or three examples I've seen being talked about significantly in the last, I want to say, few months is optimized loan origination, where the loan application process and key processing times have been reduced just by leveraging data analytics, [such as] focusing on where most of the risk is versus not.

Penny Crosman (15:33):

Well, you talked about using this technology to make better decisions, and I was thinking that that's a hard thing to translate into a return on investment. How can banks look at that kind of use, where they're using generative AI in the background and maybe their decisions are more precise, better weighted, but they're not necessarily generating revenue or getting any sort of direct cost savings out of it? How do they make the case for that kind of use?

Sid Khosla (16:18):

Yeah, that dynamic has existed for a very long time: everybody overestimates the value of a technology in the short term and then severely underestimates the impact it may have longer term. And I think that's what's going on in the short term, just because there's so much experimentation happening. The cost of processing and server capacity and energy is so high, it may seem as if these efforts aren't getting the outcomes they want. But you do want to learn through this experimentation and find the use cases that make the most sense for you. So there's a couple of things that are going to happen. Number one, I think the value being delivered, especially through some of the use cases we talked about, is going to become increasingly accretive. And then the cost of developing the AI infrastructure, as well as small language models that are very specific to institutions and use cases, is going to continue to go down. Through that combination, and we've already seen some of that in the last few months, there's significant value to be derived out of this longer term. This is not a passing fad; this is pretty structural to the way we do business. And you're seeing both institutions and capital markets and the valuation of firms changing based on that.

Penny Crosman (17:54):

So going back to your prediction that within the next two years, generative AI uses will follow the 80/20 rule, where 20% of the use cases provide 80% of the value: can banks beat that and make that ratio much more favorable, in the sense that more use cases provide more value? And what might be some best practices toward that end?

Sid Khosla (18:27):

Sure. And look, like I said, 80% of the value may come through 20% of the use cases through AI, but there's still going to be a lot of complementary leverage of other technologies, like we talked about. So I just wanted to make sure I frame it that way. Look, there are two or three things that are important here. Number one: a big transformational time like this is always a good time to reinforce what your competitive advantage in the market is as an institution. Technologies enable that advantage; they don't necessarily create the advantage or add clarity to it on their own. So I'd say, number one, the firms we're seeing be very successful at this are very clear about what their competitive advantage is in the market, and then they back that advantage and point the technology in pursuit of it.

(19:25):

So that's number one. Number two, I'd say, is we're increasingly seeing a very business- and business-user-driven approach, especially to agentic AI: the democratization of agent creation, putting the technology in the hands of business users. That has a couple of benefits. Number one is these are the folks that are closest to the day-to-day operation, so the translation of what needs to happen into a technology, into an agent, is actually pretty quick. And second, from an organizational change, acceptance and buy-in standpoint, it's definitely a shorter path. So there's something to be said about the democratization of the technology and putting it in the hands of business users so they can create value as they best define it, and then leveraging what's being created, whether that's agents or even a multi-agent scenario, bringing it back into an overall governance and reuse framework that helps the rest of the firm.

(20:43):

And then the last thing I'd say is rapid learning. In this Darwinian journey over the next few months and years, it'll be clear where you want to double down in terms of use cases and where you want to fail fast. And part of the learning here is going to be to fail really quickly in use cases that just aren't the right ones for AI. Firms that are able to make those decisions quickly are going to see success, as opposed to firms that may linger on too long with a set of use cases that may not be the best uses.

Penny Crosman (21:25):

Alright, that makes sense. Well, Sid Khosla, thank you so much for joining us today, and to all of you, thank you for listening to the American Banker Podcast. I produced this episode with audio production by WenWyst Jeanmary. Special thanks this week to Sid Khosla. Rate us, review us and subscribe to our content at www.americanbanker.com/subscribe. For American Banker, I'm Penny Crosman, and thanks for listening.