As financial institutions race to modernize their operations, agentic AI is emerging as a catalyst for change—shifting from assistive tools to autonomous digital teammates. In this panel, industry leaders will explore how agentic AI is scaling customer service, sales and operations by blending intelligent automation with human judgment. The discussion will focus on real-world use cases in banking, the skills needed to manage LLM-powered workflows, and the organizational shifts required to unlock value—while maintaining compliance, transparency, and trust.
Transcription:
Bailey Reutzel (00:14)
Hey there. Welcome to the American Banker Agentic AI Summit, where your bank is run by robots and they're kind of killing it. Hello, carbon-based life forms, digital assistants, and whoever accidentally clicked the Zoom link thinking it was trivia night. Welcome to the most electrifying event in finance since someone first asked, wait, what if we put money on the internet? You are now officially part of the AB Agentic AI Virtual Summit, where artificial intelligence doesn't just answer questions: it makes decisions, it files reports, it approves loans, it closes tabs you forgot about, and it tells your cat to politely get off your keyboard. Gone are the days of those boring bots and beige banking. Agentic AI is here with opinions, goals of its own, a fully optimized quarterly plan, and the swagger of a FinTech startup wearing a hoodie at a gala. Today, we'll explore the world where your underwriting software might unionize.
(01:12)
Your customer service rep is a sentient spreadsheet, and your fraud detection system has a sixth sense for sketchy vibes. Expect brilliant speakers, wild forecasts, and at least one use of the phrase "sentient mortgage assistant." It's going to be bold, it's going to be weird. By the end of it, you might just want to hire an algorithm to be your life coach. So mute your notifications, silence your inner skeptic, and prepare to experience the brave, brainy, slightly bizarre future of banking and agentic AI. That's a pretty good intro, right? Well, ChatGPT wrote it. And I'm not telling you this to confess my sin of sloth. No, no. I'm telling you this because I'm trying to highlight how good AI has gotten, but I know most of you here already know this. You're playing hooky from work right now, listening to this and having ChatGPT write your reports and emails.
(02:05)
I see you. I know you're out there. But that's Gen AI, and ladies and gentlemen, Gen AI is child's play. We are here to talk about agentic AI and what it can do in banking and lending. To kick off this summit, we've got a panel discussion on the use cases and benefits banks and lenders can count on by adopting agentic AI. This is "From Copilots to Digital Workforce," and my esteemed panelists today are Jim Collins, Managing Director and Financial Services Industry Advisor at Salesforce. We have Andrew McKishnie, he's the VP of Engineering at Multimodal. We have Pedro Uria-Recio, Chief Data and AI Officer at CIMB Bank, and Sumeet Chabria, Founder and CEO of ThoughtLinks. Alright, so let's get into it. Let's talk about some of the benefits of deploying and adopting AI. We start with operational productivity gains; there's cost efficiency, new customer acquisition, revenue growth, risk mitigation. The list goes on and on, all kinds of automation. So I'm going to start with Jim. Which ones have you found most compelling for banks and FIs? Which ones are they thinking about most?
Jim Collins (03:22)
Yeah, great. Bailey, thanks, and thanks for having me. Really appreciate the opportunity to talk about this exciting topic. At Salesforce, we think about agentic AI as a way to augment the workforce today, and that's where we're seeing banks lean in. We actually just had a financial services summit in May in New York, and we asked 75 bankers, "If you had to create a digital labor workforce, where would you start?" Surprisingly, maybe not to some, but to me, they said they would start in the operational area, specifically fraud mitigation. Fraud is a big issue right now in banks because, guess what? The fraudsters are using AI, so how do you combat that with AI? It's truly indicative of what is to come and what banks are focusing their attention on. So operations, definitely: creating not only efficiencies but also fraud mitigation to prevent fraud losses is exactly where we see our clients starting from an agentic perspective.
Bailey Reutzel (04:23)
Yeah, interesting. And Pedro, I'm going to throw it to you. Your work at CIMB Bank, where are you guys trying to implement AI?
Pedro Uria-Recio (04:33)
Well, just to start with an introduction, CIMB Bank is an Asian bank. We operate across the countries of Southeast Asia, with 28 million customers and 200 billion in assets, and we are the fifth largest Southeast Asian bank. There are many areas where banks in Asia are exploring generative AI, but I would mention four areas within the business units, like consumer, commercial, and wholesale banking. The first one is customer service, everything related to customer service, whether you have agents helping human customer service agents or agents that are going direct to the customer. The second area, and I would reiterate what Jim has said, is everything related to relationship managers. So sales. You can have tools that help relationship managers be more efficient, know what to recommend, and do the research they have to do prior to a customer conversation. There are many, many ways of helping relationship managers.
(05:37)
Then I would say the next thing is everything related to underwriting: the whole process, risk models, everything related to that. And then finally, within the business units, there are a number of new services that can be provided to customers that were not possible before. There are many Asian banks, in China for example, that are providing services like very advanced advisory to customers, AI CFOs for SMEs, and these kinds of things. Now when you go to the business enablers that support the business units, there are also a lot of things that we are exploring and all the other banks in the region are exploring. I would say risk and fraud is a big area: having personalized agents for investigators to analyze fraud, to create all those analyses to detect fraud. The second area here is everything related to compliance and everything related to audit. There are many things in audit that could be automated end-to-end where you wouldn't need a human. You would need a human to supervise that everything is going correctly, but you wouldn't need a human to operate all those audits. And then we have two other areas, which are finance and procurement. Those are the bigger areas where I'm seeing most banks in the region, in Asia, working on AI. And I would assume that in the US it would be something very, very similar to this.
Bailey Reutzel (07:08)
Yeah, that's interesting. We're going to dig into all of those topics a little bit more, but I want to give everybody a chance to answer this from their own perspective. So Sumeet, you're advising some of the banks in the US, I think predominantly. What are they looking at, and are you advising them to look in other places sometimes?
Sumeet Chabria (07:29)
Yeah, I run a company that is focused on advising, I would say, the top 10 of the top 20 banks in a strategic capacity on this. So one of the key places where I come in with the team, and I agree with everything Pedro and Jim have said, is that I've actually taken a more enterprise view on this: rather than look for opportunities that are very specific, so customer servicing, call centers, fraud, can you look at a process more end-to-end? Take wealth management as a business model in itself. Wealth advisors do a lot of work that is not client facing; it's just KYC and all of that. So can you look at certain businesses more end-to-end and come up with a strategy that says what will a human do, what will a Gen AI engine do, and where could agentic AI play a role, but look at the process a bit more end-to-end.
(08:25)
Now that's hard to do because this thing changes so quickly. Whatever studies we all did three months ago, three months later banks have moved past. So the question is how do you create agility and a methodology around evaluating what's available and adopting and integrating it? So I agree with all the categories. I think operations, which Jim mentioned, is a huge opportunity, because technology and operations teams are 40 to 60% of bank headcounts when you add vendors and third parties and servicing companies and the India organization and so on. And banks, even after a lot of automation, which has been great over the years, still have a lot of manual interventions in every process. So operations is huge. Technology itself is huge because the development teams are very large in these banks. They're like 10, 20, 25% of the headcount of the bank.
(09:14)
And a lot of this development could be done through AI agents over time. But there's also customer servicing, customer engagement, virtual financial assistants, that's a whole category in itself. And then there's just general automation without even facing a customer or a specific team. Can you automate a lot of the stuff that happens in the background? I mean, why shouldn't things get fully checked? Why do we have incidents? Can something be running 24x7 checking for things that are happening? Can AI agents go through the logs and make sure there are no errors that go unreviewed at any severity? So AI could be another line of defense in the company. You've got three lines of defense: you've got the front lines, you've got audit and compliance, but AI could be a line of defense as well. And so that's some of the thinking that I bring into my discussions.
Bailey Reutzel (10:04)
Yeah, super interesting. Andrew, I'm going to pass it to you to answer the same question. Where are you seeing financial institutions want to implement agentic AI?
Andrew McKishnie (10:15)
Yeah, well, I think that they want to become fully AI ready, and sometimes there's an expectation that that's going to happen on day one. I think it's really easy to fall prey to some of the hype around AI. AI is extremely effective, and compared to even nine months ago, the latest release of models has been a huge leap forward, but we're still not at the point where you just completely offload work to AI. So to echo a point that Jim made, at Multimodal we have a similar philosophy: we want to augment humans and let them work more efficiently. The other three guys here have taken a bit of a wider view, so maybe I'll get a little bit more into the nitty-gritty.
(11:11)
So a few of the areas that we work in at Multimodal: we provide solutions around different workflows within banking. Let me talk about loan origination as just an example. In loan origination, you have four major components that can be offloaded to the AI. The first one is your document triage. What that involves is accepting and ingesting your documents, classifying those documents, and extracting the information out of them. That then allows you to do triage. Is this time sensitive? Does this need to be routed to time-sensitive team A? Do we have suspicion of fraud? Should this be routed to the fraud detection team? Things like that. And then you can automate processes where, say, if the application is incomplete, maybe they're missing a bank statement, you can automate notifying the client and returning the file.
(12:09)
So before your loan officer has even looked at the file, we've verified that it's complete. That's the first leg. The second leg is around diligence. Here you need to do some more digging into the file, and the way that we operationalize AI around that is through AI agents that operate in a chat-type interface, but they're powered by your data underneath. That's any internal guidelines, external compliance and regulation documents, things like that. The third leg is decisioning. After you've ingested all your documents, the file is complete, you've done your compliance checks and everything looks good, then you have maybe some calculations that need to be made. Maybe different types of applicants have to go through slightly different algorithms. The AI is able to do that orchestration, and then you arrive at a decision. And that decision can be: everything looks good, you should approve it.
(13:22)
It might mean, hey, they look like they're a little bit low in their reserves, maybe do a little bit more manual investigation there. Or it could be, hey, this is just a garbage application, reject it outright. And then finally, everything that has been done needs to be papered and put together in a single report. Because one of the things that does block AI agent adoption, particularly in regulated industries like banking and insurance, is the fact that AI can be somewhat of a black box, and we're not always sure why it has made the decisions that it has made. So papering a final report with all the actions the system of agents has taken, as well as the reasoning for why it has taken those actions, is where we find we're able to help financial institutions a lot.
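As a rough illustration of the four-leg flow Andrew walks through, here is a minimal sketch in Python. It is not Multimodal's implementation; every function, field, and threshold is hypothetical, and the stubs stand in for the model calls a real system would make.

```python
# Hypothetical sketch of the loan-origination flow: triage -> diligence -> decision,
# with every step written to an audit log that becomes the "papered" report.
from dataclasses import dataclass, field

REQUIRED_DOCS = {"application", "bank_statement", "id_proof"}

@dataclass
class LoanFile:
    applicant: str
    documents: dict                      # doc_type -> raw text
    audit_log: list = field(default_factory=list)

def triage(loan_file: LoanFile) -> str:
    """Stage 1: classify/extract, then route or bounce incomplete files."""
    missing = REQUIRED_DOCS - set(loan_file.documents)
    if missing:
        loan_file.audit_log.append(f"returned to client, missing: {sorted(missing)}")
        return "returned_to_client"
    loan_file.audit_log.append("documents classified and information extracted")
    return "complete"

def diligence(loan_file: LoanFile) -> dict:
    """Stage 2: check the file against guidelines/regulations (stubbed findings)."""
    findings = {"kyc_ok": True, "fraud_flag": False}
    loan_file.audit_log.append(f"diligence findings: {findings}")
    return findings

def decide(loan_file: LoanFile, findings: dict, reserves_ratio: float) -> str:
    """Stage 3: orchestrate the decision rules; stage 4 is the audit trail itself."""
    if findings["fraud_flag"]:
        decision = "reject"
    elif reserves_ratio < 0.1:
        decision = "manual_review"       # low reserves -> a human takes a closer look
    else:
        decision = "approve"
    loan_file.audit_log.append(f"decision: {decision} (reserves_ratio={reserves_ratio})")
    return decision

loan = LoanFile("Acme LLC", {"application": "...", "bank_statement": "...", "id_proof": "..."})
if triage(loan) == "complete":
    print(decide(loan, diligence(loan), reserves_ratio=0.25))
print("\n".join(loan.audit_log))         # the papered report with reasoning
```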
Bailey Reutzel (14:18)
Yeah, I think the AI explainability thing is pretty interesting, so let's dive into that a little bit. I am going to add a poll. For our audience listening in, you can use the chat box at any time; I'm looking at that and seeing what people are saying. I'm also going to add this poll to see which use case for agentic AI you all are most interested in. So please fill that out and then we can talk about that specifically. But in terms of AI explainability: Pedro, I'm going to pass it over to you. Like Andrew said, in this regulated industry, we're going to have to know how the AI made certain decisions. If you have been working with some of these AI use cases, are you able to determine that? And where is the line drawn if you can't understand how the AI made a decision?
Pedro Uria-Recio (15:14)
Well, explainability is particularly important in credit decisions, and that's a requirement by many regulators around the world. Maybe not all of them, but many. And that's the reason why in credit scoring, sometimes the models that are used are not deep learning. They're not complicated models; they're more explainable models. But that doesn't mean that you cannot use generative AI to automate the underwriting process. All the documentation that is done through that underwriting process, the disbursement, the follow-up of the loans, all of that can be implemented with Gen AI. There is no particular need for explainability there because it's a very straightforward process. But the area that does require explainability, which is the credit decision engine, is something you probably want to be more careful about using Gen AI for, or you want to use simpler models that you can explain, right? That's the main area where you need explainability. There are other areas in banks where explainability is not such a big factor. If we are talking about customer service, it is not such a big factor. If we are talking about sales, relationship managers, wealth advisors who are also relationship managers, it is not necessarily such a big factor. It's a big factor in credit scoring.
Bailey Reutzel (16:36)
Yeah, I think the customer service thing is interesting to say that you don't need explainability only because
Pedro Uria-Recio (16:43)
You might need auditability. You might need auditability in customer service, right?
Bailey Reutzel (16:46)
Yeah, yeah, fair enough.
Pedro Uria-Recio (16:47)
Fair enough. Which is a different factor. Some of the regulators, for example, the regulator I always have in mind is the Monetary Authority of Singapore, which is the one that is kind of leading the region in Southeast Asia. So the Monetary Authority of Singapore, and many others in Southeast Asia that have followed the example of Singapore, have a standardized set of principles like fairness, ethics, accountability, and trust, but explainability is not necessarily one of these main principles. It is important in credit scoring, but the other principles like accountability and trust are complementary to it. So there are different principles that banks have to comply with in different areas, and in those areas the other principles are more important.
Bailey Reutzel (17:44)
Jim, I'll pass it to you. Andrew had mentioned there are different models, and Pedro also mentioned different models, right? There are the big general-purpose models like ChatGPT, but then there are companies building private models just for the bank's data, for instance. So Jim, I'm asking how you're seeing banks think about that: what architecture, what model, what system they use.
Jim Collins (18:10)
Yeah, Bailey, great question. And to piggyback off of Pedro's comments: with our Salesforce platform, when it comes to agentic AI, we have that trust layer built in. We understand from a regulated industry standpoint what's needed, whether it's health and life sciences or financial services, and we don't focus on explainability per se, but we focus on traceability and auditability, understanding what the regulators will be looking for. And it's interesting, I heard the other day someone say, "Well, soon the regulator will be an agent too." So it'll be interesting to see how that plays out. But when it comes to models, Bailey, our philosophy is BYOM, bring your own model. Because what we are going to do from an agentic standpoint and an AI standpoint is ground our AI in your data, whatever data that is, whether it's structured data, unstructured data, et cetera. And when it comes to compliance, it's interesting because a lot of people say, "Well, with AI, there could be some compliance issues."
(19:06)
Well, it's grounded in your data, and if an agent is giving a customer a response or an answer and it's incorrect, we've just helped surface a potential issue with your data. And if you think about the answers that agentic AI agents give, they're going to be more compliant than the six or seven of us on this call today, because we can all give different answers. A lot of times you have compliance issues because of human error. This actually eliminates human error. AI can make you more compliant, not less. So I just wanted to share that thought, Bailey.
Bailey Reutzel (19:40)
Yeah, honestly, I think that's a really interesting, very compelling statement you're making; we talked about this before. I think it's hard for some people to see that, only because they're probably using Gen AI like ChatGPT more regularly than a private agentic AI system. And those larger models hallucinate, so they do give you wrong information. So I think that's where some of this comes in for people, where they're like, "Oh, I think agentic AI is not going to have the right answers for me." So yeah, Sumeet, I'll pass a similar question to you. How are you seeing banks and financial institutions think about what models and systems to use?
Sumeet Chabria (20:23)
Yeah, I think these are great answers. I would say that it's part of the ecosystem of how you implement and what you implement, and where a human in the loop comes into this. For sure, auditability and traceability are important. I would say that with models today, it's not binary. It's not that they're completely unexplainable, but they're not fully explainable either. They can't tell you the inner workings inside the model. But you could still, to Jim's point and Pedro's point, figure out what data was used in that decisioning. You could still figure out some of the information and decisioning criteria that were used. Enough information can be given out of the model to backtest and see if it's doing what it's supposed to be doing, but the error rates are still there. You can still have a human compensate for the error and verify: trust, but verify.
(21:12)
You can have two models work together, one model checking the output of another model. So there's a lot in the implementation and how it's embedded that comes into play. The other thing is that most clients I've worked with have a consequence framework. So if the model is giving you a slightly incorrect answer, fraud is a good example: let's say there are false positives in fraud. You catch significantly more fraud, but a few of the flagged alerts are incorrect. A human can look at those and bypass them; you still caught a lot more fraud. And that was the point that the Bank Policy Institute, I think, made in their latest paper on why fraud detection has already improved 50% in banking through AI. So I think it's about how you use it, where you use it, what part of the process it integrates with, and what controls are embedded to ensure that it's working properly. But there's always going to be an element of risk you haven't solved for, and then you have to make a call whether it's ready for primetime for that use case or not. To your point, I would not use it in credit underwriting or lending, if you're approving somebody's mortgage application, in the mortgage decisioning itself. But there are a lot of processes that lead up to that final decision where AI could be used.
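To make the two-model check and the consequence framework Sumeet mentions concrete, here is a small hedged sketch. The thresholds, the dollar cutoff, and the stubbed model calls are invented for illustration, not taken from any bank's actual framework.

```python
# One model raises a fraud alert, a second model re-checks it, and a simple
# consequence framework decides whether a human must review the alert.
import random

def score_transaction(txn: dict) -> float:
    """Primary model: probability the transaction is fraudulent (stubbed)."""
    return random.random()

def verify_alert(txn: dict, score: float) -> float:
    """Second model re-checks the first model's output (stubbed)."""
    return min(1.0, max(0.0, score + random.uniform(-0.1, 0.1)))

def route(txn: dict) -> str:
    s1 = score_transaction(txn)
    if s1 < 0.5:
        return "pass"                          # no alert raised
    s2 = verify_alert(txn, s1)
    # Consequence framework: high-value or disputed alerts always go to a human.
    if txn["amount"] > 10_000 or abs(s1 - s2) > 0.2:
        return "human_review"
    return "auto_block" if s2 > 0.8 else "human_review"

print(route({"id": "T1", "amount": 25_000}))
```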
Bailey Reutzel (22:28)
Yeah, for sure. Pedro, did you want to jump in there? I thought I saw you kind of twitch around. You wanted to say something?
Pedro Uria-Recio (22:35)
No, no, no, no. I'm fine. Fine.
Bailey Reutzel (22:37)
Okay. Yes. Great. So I think we want to talk about a real-world use case, something that's already happening. I have a question here about the current biggest business and customer impact use cases, the ones already happening on the ground, not just in a POC. So Jim, I'll pass it over to you first, and then Andrew, and then Pedro, and then Sumeet. So get ready for your answers.
Jim Collins (23:05)
Yeah, great. Bailey, I'll just start with how we're using it at Salesforce.
(23:08)
And so before we rolled it out to our customer base across all industries, we basically rolled it out to ourselves. So if you go to help.salesforce.com, agentic AI is actually managing that process. And since we rolled it out in December, 85% of the cases or questions that have come through that channel have been answered by an agentic AI agent. So we've deflected 85%, which has freed up capacity. Think about capacity: it freed up the team members who were manning that help channel to do other things, to have deeper client engagement, deal with more complex issues, and really focus on servicing our customers to the best of their capability, because the mundane, routine questions have been taken care of by an agentic AI agent. So you think about the combination, and Andrew mentioned this earlier: it's about augmentation, right? It's not about replacing; it's about humans leveraging agents in order to be more productive, to free up capacity, and to speed up resolution. So that's one of the best cases I've seen.
Bailey Reutzel (24:18)
Yeah, that's interesting. When you say the agent, AI agent answered that, did they answer it where the customers were happy with the answer?
Jim Collins (24:27)
They did. And if they didn't, then the agent would hand off to a human. Once again, a human in the loop is there to handle any escalation necessary, but 85% of the questions are being answered satisfactorily by the agent.
Bailey Reutzel (24:41)
Yes. Great. Okay. Yeah, we were talking about this as well. If I call into customer service, it's because I have an exception. I'm not calling in to ask for the hours of the bank branch; I'm just going to look that up online. So my exceptions with today's customer service do not get handled by the automation, which is not AI. I'm not trying to say that that's AI; it's just a customer service digital robot, I don't know, there's probably a better term for that. So I have not had great experiences with customer service agents who are not humans. And I guess that is one of the use cases we keep talking about, how agentic AI can help customer service, and I remain a bit skeptical, I guess. Andrew, do you want to jump in there?
Andrew McKishnie (25:30)
Yeah, for sure. So I think one of the biggest areas that we see in production today is that first piece I mentioned, the document triage and information extraction. A way to think about your overall AI journey from an enterprise perspective is to really think of it like you're onboarding a junior employee. On day one, you're not going to have this junior employee making complex loan decisions, but you might have them looking at a bank statement, saying whose name is on it, and entering that into a database. So I think things like extraction, as well as chat-based agents around compliance and things like that, have the lowest lift to start getting production-level results, versus some of the more complex tasks, which take a lot more time to develop.
(26:36)
And as I was mentioning earlier, there's a lot of excitement around AI. There's a lot of "Hey, let's get using it." And sometimes companies think they're going to completely revolutionize everything with AI. They come with this plan and they're like, "Hey, we want to do all these things," and we're like, "Great, that's going to be 24 months of work. This is a lot." And it's kind of like, "Whoa, why is it going to take so long?" So I think that finding more bite-sized chunks where you can inject some AI, create some efficiency, and augment the humans already doing that work, from a vendor's perspective as well, really allows you to buy some trust. "Hey, we've built this agent, it's doing what it's supposed to do. Now you can trust us to move on to some of these more complex tasks." And that buys some goodwill with the team. Even if it is an internal build, you still have to negotiate with the business units and the team members to actually start using this, right? And if you can give them something that's really effective right away, it buys you some time to work on those systems that are much more complex and take a lot more work and calibration to get working properly.
Bailey Reutzel (27:48)
Yeah. When you say find these chunks, these maybe simpler, not so complex chunks, how would a banker or financial institution go about finding those chunks? Does that make sense what I'm asking?
Andrew McKishnie (28:01)
Yeah, no, and again, I think it just ties back to thinking of it in terms of how senior or how experienced an employee you would need to do a given task. Tasks like data entry and looking up information, things that just require a lower degree of skill, are where you should offload to AI first. And that allows the team developing the AI to really get to understand your business and your processes, which can really aid in developing those more complex systems.
Jim Collins (28:47)
Bailey, if I could just jump in on top of that. Andrew is right on target with this. The way we're thinking about it is that in these discussions during strategic planning season about your workforce, you ask: how do you do the work? Who does the work? What are the jobs to be done, and who can do those jobs? And once you're able to define that down to the minutiae level, you're able to execute better with a combination of humans and AI.
Bailey Reutzel (29:10)
Yeah, that's interesting. Pedro, I'm going to pass it to you. You had mentioned also that Asian banks are sort of aggressively approaching agentic AI, maybe more so than western banks or banks in the US. So what use cases have you seen already work for your bank?
Pedro Uria-Recio (29:29)
I mean, there is this whole collection of simpler use cases that Andrew was referring to, and yes, I do think that those use cases, which I call personal productivity, are very important, because that is what makes people familiar with the use of AI at work. Simple things like a copilot in the organization to handle your emails or do document summarization. There are so many things in a bank that require document summarization. There are so many things in a bank that require getting a document, extracting something from it with a small copilot, and then pasting what you have extracted somewhere else, into an Excel file. There are so many things that require this. Of course, this is the beginning, right? But then you have to go into things that are more advanced, and that's where you have tailored companions for specific roles.
(30:26)
Many banks might not want to start with use cases that are direct to the customer, because there is a risk of saying something to the customer that is not correct, and you wouldn't want to do that. That's why many banks start with employee use cases. And there are many jobs that would benefit quite a lot from having a tailored companion: relationship managers, underwriters, fraud investigators, contact center agents, lawyers. There are so many roles that will benefit from having a tailored companion. For example, if you are a relationship manager or a wealth advisor, you have a companion that you can ask before meeting a client: what do we have to recommend to this client, based on the information the bank has about them? It can handle your appointments with the client, and if you need to create an investment proposal on the fly, you create it on the fly and it gets sent to the customer as a PDF, but it's operated by this person. There are many use cases like that, and this kind of tailored companion is something we are seeing quite a lot in Asian banks, Australia, Singapore, and of course China.
Bailey Reutzel (31:44)
Yeah. Yeah, that's interesting. If you need an introduction for a virtual summit, you can also write that with generative AI, for instance. You bring up a point that maybe I should have touched on at the beginning, which is the difference between copilots, Gen AI, and agentic AI. I am sure that a lot of our audience sort of knows the difference, but Pedro, could you just go over that? I think the definitions aren't super solid.
Pedro Uria-Recio (32:08)
Yeah, I think for me there are five levels, and this is the way I always explain it. The lowest level, the easiest thing you can do, is personal productivity, a copilot or others. This is an LLM for general tasks. You go to the next level, and this is tailored companions for specific jobs, right? Specific jobs that are quite massive. They have special things to do, and they need something tailored: relationship managers, investigators, and so on. Then you go to the next level, which for me is autonomous agents. These autonomous agents are no longer operated by a human. They're autonomous; they're supervised by a human, but they can run on their own. So I could imagine things like marketing campaigns that are autonomous. I could imagine a bank in the near-term future where most of the marketing campaigns are autonomous. There are many things in audit and in compliance that could be done autonomously.
(33:05)
So that's level three. Then you go to level four, and level four is when you have agents that are autonomous, of course supervised by a human, but now they're talking directly to a customer. So you could have voice bots. It's very common to have voice bots for collections that call the customer and say, "Hey, you didn't pay your credit card." "Well, I'm sorry, okay." "Please remember. Thank you." That's becoming very, very common. And you can have more complicated bots or voice bots that are direct to customer. Of course, banks might not want to deploy this first, so they may want to deploy the previous cases because those are more focused on the employee. And then you go to the next level, level number five, which I would say is just the next stage of the technology, and this is when you have teams of agents.
(33:53)
So you have autonomous agents that engage with other agents, and together they automate a whole process. You could think about underwriting. You have an agent that is running more or less autonomously, getting the documents from the customers and verifying those documents. Then it passes to another agent that, based on those documents, calls a more traditional credit scoring model, which has to be explainable, and scores the client. And then it goes to the next one, which maybe identifies the cases we are not so sure about, and those go to a human. So you have a team of agents, and it could be a team of agents and humans that are working together. That is level five. So to repeat: number one, personal productivity. Number two, tailored companions. Number three, autonomous agents. Number four, direct to customer, also autonomous. And number five is teams of agents. That's how I see it.
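As a hedged sketch of the level-five pattern Pedro outlines, the snippet below keeps the actual credit decision in a deliberately simple, explainable scorecard, with agents around it and borderline scores routed to a human. The agent names, weights, and thresholds are invented for illustration only.

```python
# Team-of-agents underwriting sketch: a document agent feeds an explainable
# scorecard, and a triage agent hands uncertain cases to a human underwriter.

def document_agent(application: dict) -> dict:
    """Agent 1: gathers and verifies documents (stubbed as already verified)."""
    return {"income": application["income"], "debt": application["debt"]}

def scorecard(features: dict) -> tuple[float, dict]:
    """A simple, explainable model: a weighted sum with visible contributions."""
    weights = {"income": 0.00002, "debt": -0.00005}
    contributions = {k: weights[k] * features[k] for k in weights}
    return 0.5 + sum(contributions.values()), contributions

def triage_agent(score: float) -> str:
    """Agent 3: confident cases are decided, borderline ones go to a human."""
    if score >= 0.7:
        return "approve"
    if score <= 0.4:
        return "decline"
    return "human_underwriter"

app = {"income": 85_000, "debt": 30_000}
score, why = scorecard(document_agent(app))
print(triage_agent(score), why)   # decision plus the per-feature explanation
```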
Bailey Reutzel (34:44)
Yeah, interesting. Okay. Yeah, Sumeet, I'm going to pass it to you if you want to comment on that because again, I think that the definitions, they're not so solid because the technology continues to develop. So Pedro, I love this explanation, but I wonder if everybody would have that same explanation. And then Sumeet also, what are you seeing in terms of real-world use cases that are happening right now?
Sumeet Chabria (35:09)
Yeah, look, I agree with Pedro's definition. I do think the definition is going to change a bit over time as the technology matures. But broadly, the shift has now happened in the last few months, significantly, at least with the US banks, from Gen AI engines to some level of agentic AI implementations, whether they're semi-autonomous or fully autonomous, embedded with oversight or not. I still think there's a huge set of opportunities for agents that are not necessarily augmenting human capability or capacity, agents that in the background do tasks that humans generally don't do well or don't do at all: checking to make sure there have been no network intrusions, scanning the entire network in the background. No human being can scan every node every second all day long. So there's some work to be done there. Ultimately they have to feed into some process over which a human being has a level of oversight.
(36:09)
But I'm seeing very broad-based implementations now across the board, still at early stages. Markets operations has taken the lead, I think, generally beyond commercial and consumer, on the setup of information and data in all the systems. I mean, markets operations does get a lot of audit issues and RAs for data setup. Trading businesses are very high-volume and high-value businesses. So standard settlement instructions have to be set up, counterparty information has to be set up, and all of that has to be validated. Agentic AI implementations have started to do that level of validation. A lot of documents have to be created, like ISDA documents for derivatives, and agentic AI can help with that and is being implemented. There are a lot of reconciliations that happen both on the finance side and on the operations side in banking, especially in markets. Rec agents are coming through right now that help you make sure everything is getting reconciled properly, because otherwise there are breaks that flow through the ledgers and cause, again, audit issues. On the front office side, early versions of algorithmic quantitative trading models are being implemented with agentic AI.
(37:19)
They don't replace the trader, but think about price actions for traders: you've got pricing there and suddenly a small alert comes in with a recommendation for the trader, given market news and everything else. I mentioned early on wealth banking: for wealth advisors, there is a significant amount of work to make sure client snapshots get to the advisors so they have a full picture. Agents are also helping screen some investments for suitability. For example, New York State, I think, issued a very big bond yesterday. How do you know, as a wealth advisor, whether Jim or Pedro are suitable clients for that, given their profiles? Agents can run that sort of check and give the advisor a dashboard that says, "Here are the 10 clients you should call today." I'm sure that with platforms like Salesforce and others, you can boost that up even more, but that's just an example.
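Below is an illustrative-only sketch of that kind of suitability screen. The fields, thresholds, and the toy "new issue" record are all made up; a real agent would pull client profiles from the bank's systems rather than a hard-coded list.

```python
# Toy suitability screen: check a new issue against client profiles and
# produce a short call list for the advisor.
new_issue = {"name": "NY State GO Bond", "rating": "AA", "min_horizon_years": 5}

clients = [
    {"name": "Jim",   "risk_tolerance": "low",    "horizon_years": 10, "cash": 250_000},
    {"name": "Pedro", "risk_tolerance": "medium", "horizon_years": 3,  "cash": 500_000},
]

def suitable(client: dict, issue: dict) -> bool:
    # Invented rule: a highly rated bond suits low/medium risk tolerance,
    # but only if the client's horizon covers the bond's minimum horizon.
    return (client["risk_tolerance"] in {"low", "medium"}
            and client["horizon_years"] >= issue["min_horizon_years"]
            and client["cash"] >= 10_000)

call_list = [c["name"] for c in clients if suitable(c, new_issue)]
print(f"Clients to call about {new_issue['name']}: {call_list}")
```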
(38:13)
Same with research assistance. I'm seeing a lot of work being done on the equity research side, where analysts do so much work discovering information on company profiles and all of that; agents can pull it all together, and then analysts can focus their time on writing reports. Technology development as well. In the last week alone, I've read that three or four banks announced agentic AI implementations to speed up development in different capacities. So as we discussed, in the last two or three months it's gone from a few use cases, including fraud, to something broad-based: in every line of business, somebody's thinking about agentic AI now.
Bailey Reutzel (38:53)
And that's speeding up development. Is that sort of this AI coding kind of thing?
Sumeet Chabria (38:59)
Vibe coding, but it's also documentation. It's also being able to do reverse engineering and some code conversion from one platform to another, but documentation is a big part of it as well. And just overall, getting more productivity from tech dollars. Technology budgets are massive, and there's never enough; usually there's more demand than supply. So how do you make sure you're productive, whether you have 5,000 developers or 20,000 or 50,000? And I think somebody else made the point, which is very valid, I forgot who it was: compliance checks are a big deal. If you're in a big bank writing code, you've got to comply with policies and procedures, laws, rules and regulations, and there could be about a hundred different documents you're supposed to read to make sure you're AML compliant, KYC compliant, compliant with the Americans with Disabilities Act, your font is 12.0 on the mobile phone.
(39:53)
There are lots of these things, laws and regulations. And so who's checking to make sure? Usually there are audits and checks that happen within the process and outside the process. The question is, can an agent go through that, do the first run for you, and say where you're compliant and where you're not? Or, if you started a project and did not get the cyber security teams in the loop early on, it gets flagged as a risk. So there's a lot of that automated agent work that can happen, both complementing a human and, in some use cases, running in the background where it doesn't displace a human being but just adds a level of intelligence and control.
Bailey Reutzel (40:31)
Yeah, that's super interesting. I was just looking at the poll results, and 63% of our attendees said that the use case they're most interested in is compliance and fraud prevention. I had not thought of compliance together with that use case, so that's super interesting. And Andrew, I want to pass it to you. On the engineering side of things, do you see engineers using agentic AI for that process? For being like, "I want to build this application, but I don't know if it's compliant, or if the data I'm using is compliant," et cetera, and so the agentic AI is going around checking on compliance?
Andrew McKishnie (41:14)
I don't think that exact flow is exactly how we're using it, at least at Multimodal. Maybe I'm getting caught up on writing specifically compliant code; I think compliance really has much more to do with the data and things like that. That's fair. But yes, we definitely use models for checking for compliance. A lot of the time, what that means is that for a specific use case we'll create what we call spaces, and essentially that is all the compliance and regulatory documentation around that use case. Like Sumeet said, I think it was Sumeet, humans might say that we're going to read these 100 or 250-page documents, but no one's reading every single word there. And if they are, I have serious questions about what their personal life looks like. But an AI agent can do this, right?
Pedro Uria-Recio (42:27)
Finding the documents, I mean sometimes those
Andrew McKishnie (42:30)
Exactly.
Pedro Uria-Recio (42:32)
You don't even know it is written somewhere. You don't even know how to find it.
Andrew McKishnie (42:36)
Exactly, exactly. So a key thing that we build into all of our agents is inline citations, because a lot of the time you might ask a chatbot a question, look at the answer, and think, "I know that's right, but it's just not complete. I need more information." The agent is able to identify, "Hey, I took this information from page 118 of document X," and then, like Pedro said, you're just that much quicker. You say, "Okay, I've asked this question, I got that answer, I need a little more info, but I know exactly where I need to go to find it." It really reduces a lot of friction in the compliance process.
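Here is a minimal, hedged sketch of that inline-citation idea. The corpus, the toy keyword retrieval, and the citation format are invented; a production agent would use real document search and an LLM to draft the answer, which is only stubbed here.

```python
# Every retrieved passage carries its source document and page, so the answer
# can point the reader to exactly where to read more.
corpus = [
    {"doc": "AML Policy",   "page": 12,  "text": "Transactions above $10,000 require enhanced due diligence."},
    {"doc": "KYC Handbook", "page": 118, "text": "Identity documents must be re-verified every 24 months."},
]

def answer_with_citations(question: str) -> dict:
    # Toy retrieval: keep passages that share any word with the question.
    words = question.lower().split()
    hits = [c for c in corpus if any(w in c["text"].lower() for w in words)]
    citations = [f'{h["doc"]}, p.{h["page"]}' for h in hits]
    # A real system would have an LLM draft the answer from `hits`; here we echo them.
    return {"answer": " ".join(h["text"] for h in hits), "citations": citations}

print(answer_with_citations("How often must identity documents be re-verified?"))
```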
Bailey Reutzel (43:18)
Yeah, that's super interesting. Pedro, do you want to add to that in terms of where?
Pedro Uria-Recio (43:23)
The other thing that is very interesting about fraud and compliance is that fraud identification is one of those use cases where you can use generative AI, of course, but you have to use it in coordination with traditional analytics. A lot of the filters to identify fraud are, and I believe will continue to be, driven by traditional analytics for a while. Then you have to create network graphs of the different entities that might have collaborated in a case that you are investigating as possible fraud. They have transactions in common, and those patterns of transactions form a network graph, which is traditional analytics. And then you might have a Gen AI engine that takes the output from these more traditional analytical models, takes the output from the network graph and other things, puts together a case, and writes the case. And based on that case, we decide this is a case of fraud, or this is not a case of fraud, or we have to escalate it to a human to continue the investigation. So that, yes, I think is something that many banks are looking at, and the beauty, but also the complexity, of it is that it requires integrating multiple technologies, some of which banks are familiar with and others maybe not. So you have multiple generations of technology, and that can be a challenge as well.
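To illustrate the hybrid Pedro describes, here is a rough sketch: a traditional-analytics step builds a small network graph of entities linked by transactions, and the Gen AI write-up is stubbed as a plain string template. The entities, amounts, and pattern rule are all invented.

```python
# Traditional analytics flags a tightly connected ring; a (stubbed) Gen AI step
# drafts the case narrative for a human investigator.
from collections import defaultdict
from itertools import combinations

transactions = [
    ("AcmeCo", "ShellCo A", 9_900),
    ("ShellCo A", "ShellCo B", 9_800),
    ("ShellCo B", "AcmeCo", 9_700),   # money loops back: a round-trip pattern
]

graph = defaultdict(set)
for src, dst, _ in transactions:
    graph[src].add(dst)
    graph[dst].add(src)

# Traditional-analytics step: flag entity pairs that transact with each other.
suspicious_pairs = [(a, b) for a, b in combinations(graph, 2) if b in graph[a]]

def draft_case(pairs) -> str:
    # Placeholder for an LLM call that would narrate the evidence.
    entities = sorted({e for pair in pairs for e in pair})
    return ("Possible collusion ring detected among: " + ", ".join(entities)
            + ". Escalating to a human investigator for review.")

print(draft_case(suspicious_pairs))
```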
Bailey Reutzel (44:58)
Yeah, of course. Okay. I'm going to shift gears a little bit. Someone had a really good question, sort of future of work question, and I definitely want to get to you all's big ideas about the future of work. The question specifically though was, "By targeting low-level tasks," so these smaller chunks, maybe what a junior banker would do, "How do we ensure that our human workforce still has access to the foundational experiences for skill development and career progression? Where do the mid and or senior tier people come in?" Yeah, let's see. Who am I going to pick on? Jim?
Jim Collins (45:36)
Yeah, I'll start. So Bailey, we actually looked at the stats about routine tasks, and our figure is that, on average, about 41% of a banker's tasks are routine and mundane. That's what we're really trying to move out into the agentic layer. And when you think about skilling and skill levels, it's actually an opportunity for these junior bankers to skill up and learn to manage the process, because let's face it, you need to learn to manage AI for the future. It's not that AI is replacing you, but you have to be able to manage it and direct it and guide it to get to the right answer. So we see it more as managing up, skilling up, and taking away those mundane tasks in order to be more productive and more expansive from a career perspective.
Bailey Reutzel (46:24)
Yeah. Andrew, jump in here.
Andrew McKishnie (46:27)
Yeah,
(46:28)
I just want to build on what Jim was saying there, actually. I think that learning to manage AI is going to be huge, and that is probably a much more valuable skill in the future. Another thing we have to keep in mind is that the models we're talking about right now have certain limitations. They're good at these tasks and maybe struggle with those tasks, but you have to forecast that in five or ten years' time the models are going to be inconceivably better than they are now. What that means is that if we think about a hierarchy of task complexity, the models are going to be able to push higher and higher into it. There is a world eventually where basically all the tasks will be handled by AI, but you always need a human to manage that AI and oversee it.
(47:27)
So I think that was an excellent point that Jim made, that it's an opportunity for these junior employees to upskill, or even shift skills, maybe more of a lateral move in terms of the level of task. But looking forward, you're going to see this become a requirement for positions: experience managing an agentic AI system, even if that doesn't necessarily mean the technical skills to write the code. For one, it's going to mean knowing how to prompt it correctly, knowing how to troubleshoot it when it's not working correctly, and understanding what its outputs mean, because there are going to be certain patterns, particularly around failure, and you need to be able to analyze these things. So I think it's not so much that we're eliminating these junior positions as shifting their focus a little bit.
Bailey Reutzel (48:32)
And Pedro, I want to pass it to you. I know that you have some thoughts here. Just in the green room we were sort of talking about what are the kids going to do? So yeah, go ahead. Yeah,
Pedro Uria-Recio (48:41)
So I totally agree with what Jim and Andrew said about the short term. In the short term, AI is going to create many more jobs than it will destroy. This is what has happened with every other technology revolution: the industrial revolution, the IT revolution, and even the Luddites, if you want to think of it that way. I think the question is, will that hold forever? And the answer is, I don't know if that will hold forever, for two reasons. There are two factors that did not apply in the previous technology revolutions. One of them is that the pace of change is so fast that some people cannot go that fast. Maybe the people watching this podcast can change jobs every three or four years.
(49:43)
They can change professions every few years, but not everybody is able to do that. And if the pace continues to accelerate, a lot of people might not be able to keep up. So that's one factor, right? The other factor is the evolution of the technology. AI is becoming more and more intelligent. If we reach artificial general intelligence, that is, by definition, a substitute. Are we going to reach artificial general intelligence? Well, if you listen to Silicon Valley, they say we will reach it in a couple of years. But they were saying the same thing in the sixties, more than 50 years ago, and it didn't happen then. So I guess at some point it might happen, right? And that's going to completely transform the way we work.
(50:32)
And it's very difficult to foresee at this moment what those changes will be. Actually, I am very interested in this topic; it is one of the areas I cover in my book, which you can see here behind me, about how AI will shape our future. I think the future of work might be utopian for those who adapt and dystopian for others. Now the question is, how do we get closer to the utopian side? And that is not an easy question; it is not easy to know. So that's why, when children ask, "Hey, what should I study?", it's a very difficult question to answer. Should I be a coder? Should I study engineering? It's a very difficult question to answer. What I can tell you, following up on that, is what China is doing.
(51:28)
Well, China is famous for its industries, for manufacturing. Now they are realizing that manufacturing can be automated completely. They are starting to have factories with no humans; they call them dark factories because you don't need to switch on the lights. And now they're putting everybody in sales. In China, companies like Huawei are putting everybody in sales. They're selling their systems: financial systems, IT systems, systems for energy. They're selling their systems in Latin America, in Africa, in Eastern Europe. China is putting everybody in sales. Is that the strategy western companies should take? Well, it is not what they're doing at this moment, but we have to find a way. We have to find a way. What China is doing makes a lot of sense for them and makes them more competitive. So it's a very difficult question to answer, and I think we have to work together in finding the right answer for our children, right?
Bailey Reutzel (52:35)
Yeah, for sure. And I think some of these conversations are not happening as much as I would like to see them happen. I will say, on the topic of what should I do with my life: I have a career and I still ask that question, and it was like, "Oh, I should learn to code." And now that the vibe coding thing is happening, I'm like, "Oh, I'm glad I didn't pay a lot of money to join a bootcamp, because I can code with AI now." So yeah. Sumeet, I want to pass it to you to answer this question too, to talk to us about what you think the future of work looks like with agentic AI.
Sumeet Chabria (53:06)
Yeah, look, those were some great comments. There's a dimension that I think is well covered, which is that bankers who work with AI will replace bankers who don't know how to work with AI. So AI knowledge and the ability to work and interact with it is one thing. But beyond that, I think what's going to be clear is that the work that's routine and repetitive is going to be attacked first. So people working in this industry have to replace that skill set with being able to apply more judgment, have more critical thinking, have more intellectual curiosity, because things are going to change, and be open to agility and dealing with the unknown. Take the wealth managers we talked about; I'm working with a few of them very specifically, and we're targeting that if some of this work gets completely automated through agentic AI, the wealth team should be facing clients more. Communication and networking skills become more important.
(54:08)
So I think good HR departments, at least at the top US banks, I can tell you, are deeply immersed in this discussion, thinking about what the skill sets need to be and where people need to be trained for new skills beyond playing with AI and knowing AI. That is something everybody will start to do, but certain themes are emerging, and if people adopt and embrace them, then no matter what the world looks like, they're going to be at least a step ahead. The only other thing I'll add is that I don't think this level of disruption will discriminate against only the junior folks. I think very quickly you're going to understand this is disrupting knowledge workers. I'm testing, under NDA, a legal tool that I use myself. It reads my NDAs, because I do a lot of NDAs as a business, redlines them against my criteria, and actually does maybe 80 to 90% of my legal NDA work.
(55:04)
For me, it's a very new kind of tool that I'm testing, and it tells me a little bit about what the world will look like, right? And I'm not just replacing my billing to junior lawyers; I'm getting less billing from senior lawyers now in some cases as well. So this doesn't discriminate. For anybody who feels their job is going away because they're junior: in fact, I would argue that mid-level bankers in certain businesses are more at risk than junior bankers who work with AI and come in with all this information and knowledge.
Bailey Reutzel (55:36)
For sure. You set up the last question and it has to be quick. I think we're basically over time. So it's just like, how do you use AI in your day to day? This doesn't have to be in work. It's just an example from your day to day. So Sumeet, you've already given us one. Great. And this just gives people an idea of what they could be doing. I don't know, maybe they can take more vacations if they use this AI tool. So Jim, I'm going to pass it to you, 30 seconds.
Jim Collins (56:03)
It definitely is, as Sumeet said, for informational purposes: to summarize presentations and to create more succinct ways to access and absorb information. And the last thing I'll say, Bailey, because I want to make this point and I know I've got 10 seconds left: the biggest impact that we see at Salesforce from agentic AI is scalability for organizations. We talked about productivity and efficiency, but banks and organizations can now scale faster and compete on a different level.
Bailey Reutzel (56:33)
Yeah, that's a good point. Andrew, I'll pass it to you. What AI do you use in day to day? Yes.
Andrew McKishnie (56:39)
Yeah, for me, it's a lot around writing documentation, whether that's technical documentation, writing reports for clients, emails getting sent to clients. As an engineer, I'm not maybe known for my wordsmith ability, so it's nice for AI to kind of take over there.
Bailey Reutzel (56:58)
If we were all high school students, we would flunk because we have used AI for all of our writing. Pedro, what about you?
Pedro Uria-Recio (57:05)
For me, in my personal life, I would say social media: social media posts and photos that are generated. And at work, it's deep research. I do deep research almost every day, for anything I want to learn, and it is life-changing for a professional. It is life-changing.
Bailey Reutzel (57:33)
Well, there you go. You've heard it. Here. Go out, attendees, go out and start using these tools so you can be the best worker that you could ever be, the best human that you could ever be, augmented with AI. That's all for this session. We will have another session after this. We have two more after this. The next one is a fireside chat. You just have to press the back button and then click into that other session and that'll be in 10 minutes. So we'll see you back here very soon. Thanks so much everyone. Thank you.
Opening Remarks & From Co-Pilots to Digital Workforce: How Agentic AI Is Redefining Work in Banking and Lending
July 29, 2025 12:23 PM
58:05