Opening Remarks & The AI Edge in Banking: Your Roadmap to Becoming a Pacesetter, While Staying Resilient
October 22, 2025 8:00 AM
54:36
Banks are racing to harness the power of AI to gain a competitive edge, slash costs, and boost productivity. However, the path to AI success is fraught with obstacles, from managing risk and attracting top talent to demonstrating tangible value. This discussion, informed by ServiceNow's new Banking AI Maturity Index research report, will provide actionable strategies to help you, your business, and your bank navigate the AI landscape, enhance operational resilience, and thrive in the competitive banking industry. Join Kristin Streett, Head of Banking GTM at ServiceNow, and your financial services peers for insights into what leading banks are focused on to grow their AI footprint and how you can apply them to your own business.
Transcription:
Transcripts are generated using a combination of speech recognition software and human transcribers, and may contain errors. Please check the corresponding audio for the authoritative record.
Holly Sraeel (00:08):
Morning. How is everybody? We had a big day yesterday. I hope everybody enjoyed it. For all of the Next honorees that are in the room today, congratulations again on your achievements. We were very pleased to honor you last night. This community, which is now 23 years old, would not be possible unless we had the support of companies like ServiceNow. So I would like you to give a warm welcome to Kristin Streett, Head of Banking GTM at ServiceNow, and she's going to take us through an important discussion.
Kristin Streett (00:41):
Awesome. Thank you so much. It's so great to be here. ServiceNow has been sponsoring The Most Powerful Women in Banking with American Banker and Arizent for a number of years, and so it's our pleasure to be here today and share some research with you. I also just want to thank my ServiceNow female leadership team that's here in support of me. It's so nice to see you in the crowd. I would like to introduce our very esteemed guests. Jennifer is with NVIDIA; she's going to introduce herself. Also joining us in this discussion around AI is Janna Wagoner from Fifth Third Bank. So Jennifer, do you just want to say a little bit about who you are and what you do at NVIDIA?
Jennifer St. John-Foster (01:25):
Sure. Thank you, Kristin. Good morning everyone. Jennifer St. John-Foster. I lead the banking sales team for NVIDIA. I've been at NVIDIA for seven years located here in New York and throughout that entire seven years, I've been working very closely with the largest banks.
Kristin Streett (01:41):
Nice to have you.
Janna Wagoner (01:42):
Hi, I'm Janna Wagoner. I work at Fifth Third in the technology department. I'm on the receiving end of a lot of this technology and all the new stuff that's coming out, so I'm constantly trying to stay up to speed and figure out how we can implement it at our institution.
Kristin Streett (01:58):
Awesome. Okay. So what we have today—ServiceNow sponsors quite a bit of research with Arizent, but we also run our own research initiatives. And the reason that we do that is because it helps us to learn more about our customers, but it also is valuable to our customers to be able to take that information back to their respective institutions and use that data in their evaluation of different strategies. And so what we've put on your chair is a little postcard that gives you access to the banking study that we sponsored with NVIDIA so that you can read the research in detail that we're going to go through here today.
(02:39):
So the Banking AI Maturity Index is actually an excerpt of a broader study that ServiceNow conducted of roughly 4,500 companies across 11 different industries. We took out an excerpt for the banking population, and a lot of the details that we're going to share with you here today come from that study. One of the most interesting things we found when we were looking through the data is that AI is moving so fast, but banks are evaluating their ability to keep up at a much lower rate. The maturity index score for banks actually saw a decline of 22% last year. Basically, what that means is banks are saying, "We're actually not keeping up. We're not moving our internal initiatives at the pace of AI," which is extremely fast. So much so that only two out of 10 are really seeing deployment even in the area of agentic AI.
(03:47):
And really, why is that? Our observation has been that there are so many siloed systems, teams, and data. It's very difficult to orchestrate AI across those many divides, and it's stalling the adoption of digital programs and strategies. 70% of the banks that we surveyed say that they have been unsuccessful in fully implementing their digital strategies. But in the data, there was a cohort of banks that were answering differently. So we double-clicked on that data and took a look at why and how they were answering the different questions. 97% of that cohort—which was roughly 30% of the banks surveyed—reported revenue growth directly tied to their AI strategy, and 65% agreed that they're operating with a clear, shared AI vision, which is incredible. So we're going to spend some time with Jennifer and Janna, asking their opinions, and we'll have some time for Q&A at the end.
(04:52):
So if you have any questions, we'd be happy to take them as well. But I'd like to start with you, Jennifer, because you've been in this industry for so long and you work with many of the leading banks. What is your perspective on the biggest issues and obstacles that banks are facing relative to AI adoption?
Jennifer St. John-Foster (05:10):
Yeah. So just to clarify, on working with the leading banks: within NVIDIA, for all of North America, we only actually work with 10 banking customers. Within NVIDIA, we define and classify them as "the builders of AI." So the top large banks will build their AI, and then all of the other banks that we don't cover will work with partners like a ServiceNow, engage with their platforms and workflows, and engage with our ecosystem. So a lot of my perspective is from the builders of AI that we're working directly with. I would say the biggest challenge for banks in adopting AI is really silos: corporate silos within the banking organization. The lines of business have cultural silos, and the lines of business also have data silos. So when we engage with them, we want to help them build an AI capability for the entire enterprise.
(06:02):
You may hear "AI factories" out there; that's the big buzzword. This AI factory could live within the bank's data center or it could be out in the cloud—there are a lot of these neo-clouds, like a CoreWeave, popping up. An AI factory is an enterprise-wide capability with a common set of software tools, libraries, data sets, and ultimately repeatable pipelines. That's a very hard thing for banks to adopt. It's actually been more challenging than we anticipated in terms of coming together and sharing a common set of tools and software. So I would say the biggest challenge right now is helping them break down these silos.
Kristin Streett (06:42):
That's great. Janna.
Janna Wagoner (06:44):
Yeah. From my perspective, I validate those silos. Living in a bank, I think a lot of us can probably validate that that happens, but I think it's the culture across all of it—really understanding what's happening from sales all the way through servicing, and how do we not have so many systems? I joked with our vendors here: you're all talking to us, and you're all saying you have AI. And so you go away, and we come back into a room and go, "Did you get sold AI? And what is it? Is it generative? Is it agentic now?" It's similar to cloud, where you're just working through exactly what that means. I think it is just overwhelming, and it takes time to figure that out. We know that unlocking AI is so important, and it depends on that common foundational data set.
(07:30):
Until we get those conversations and those silos broken down, I think that's it. I will say though, as a caveat, at least for my bank, it's not for the lack of desire to want to. No one's saying AI isn't for us or it's dangerous or it's risky. It's just the way that we're set up.
Jennifer St. John-Foster (07:48):
Our customers are actually confused and we're part of the problem, right? We're an AI vendor, we're part of the problem. What we actually say to our customers is "do it all." Because customers say, "Do I engage with an OpenAI API? Do I engage with these frontier models like Gemini? Do I get the latest open source model from Hugging Face and use the NVIDIA software stack to actually do the data flywheel and train my data?" And customers are doing it all because they actually don't know who's going to win.
Kristin Streett (08:15):
Yeah. I think that it can be extremely overwhelming. So definitely hurdles in terms of overcoming the internal divisions—the actual lines of business—but also connecting that data and then just understanding and keeping up with the actual technology, which is changing at an extremely rapid pace. We're going to move in a little bit more to some of the insights that we had. So for those banks that are overcoming some of those hurdles and are getting to a place where they're actually executing on AI, what is it that they're doing first? Where are they focused and what might be a good next step for you? Those are the things that we're going to discuss here in the next couple of minutes. So in the data, what we recognize is that Pacesetter banks—those that are actually implementing and seeing return on their AI initiatives—are focused on three things.
(09:06):
The first one being talent. 55% strongly agreed that they have the right talent mix, versus 33% of the overall cohort in this study. Second, they're focused on governance, which is huge—we'll break these all down and go through them one by one. 73% of those surveyed have addressed evolving data governance and security concerns, and we'll have some conversations about what that looks like and what Jennifer has to say about it. And then finally, they're really embracing agentic AI, which is an evolution of AI. It seems like GenAI is long gone, even though it isn't—it's still being used in many of the banks that I work with, and it's a very solid strategy for very specific use cases—but the conversation has totally pivoted to agentic. And so for those that are still working through strategies or trying to get a foundational baseline set up, it's like: how do we even think about agentic AI when we're just barely thinking about generative AI or AI use cases?
(10:12):
So we'll talk a little bit about that.
(10:16):
Okay. So we'll get some reactions from Jennifer and Janna on this data. Pacesetters, as I said, are focused on talent. They're training and upskilling their internal employees, encouraging them to use AI capabilities within the confines of the organization's risk appetite and what they feel comfortable with. They've identified AI champions and they're hosting learning events. ServiceNow actually hosts a lot of internal learning events. We're learning, and our company is encouraging that from a cultural perspective: "Hey, you need to be using AI externally, in your personal life, just so you're getting familiar with it. We need to help you understand what prompting is." Even those basic blocking-and-tackling educational programs are extremely helpful in getting talent upskilled. So you'll see the Pacesetters answering 81%, 68%, and 77% across those areas, versus their peers in the same group.
(11:23):
So for you two, what's the balance between using AI and keeping a human in the loop? Many times, the people I speak with are asking: is the human out of the loop? Is it completely autonomous? Agentic AI is basically autonomous agents working on your behalf against a task. So what's the balance there in keeping a human engaged?
Jennifer St. John-Foster (11:47):
I would say today, human in the loop is a critical strategy, and it's still the trend. Going back to your earlier comment about talent: talent is key. Talent is why, at NVIDIA, we only cover the top 10 leading banks. We need customers that have the talent within their bank to work closely with our team. Within NVIDIA, the data scientists and solution architects that we work with are true practitioners—they're actually training large language models and creating these examples—and because we're such a lean organization, we will only be successful working with banks that have the talent within their group as well. Talent is key. I would say that human in the loop is critical, especially if you think about AI agents and where we're going. Generative AI is less autonomous, so a lot of the impact is internal, but yes, human in the loop is still the largest trend.
Janna Wagoner (12:45):
Yeah. Couldn't agree more on the human in the loop. I think we have to be adaptable. Instead of worrying about AI taking your job, think about what this means for your next step. You will have HR strategies for managing agentic AI. It's weird, but if you manage agents today, you're going to manage agentic AI tomorrow. So what is the shift in skillset from talking to and training a human, to talking to, training, and overseeing the AI itself? I think that's really important to focus on when we're looking at talent: people's ability to embrace the next way of actually doing this. And when you break it down to just what it is, it's really not that different—the skills come naturally.
(13:36):
I think it's less technical and more about how this actually all works in our business and lowering that fear factor.
Jennifer St. John-Foster (13:43):
The talent is in this room, which is exciting. Within NVIDIA, outside of the cloud providers (the CSPs) and consumer internet (the Elon companies, Meta), financial services is the number one industry. If you think about financial services, you have the talent. Outside of big tech, they're at the largest banks. The reason being: if you think about the data scientists—the original data scientists—it's the quants in the banks, it's the model builders. AI is simply models and algorithms. It's data coming into models and algorithms, and tokens coming out. So the talent is in this room.
Kristin Streett (14:22):
I'm so glad that you said that. I was hoping you were going to bring that up because I found that really fascinating. I think previous to the evolution of AI, from a technology perspective, it seemed as though most of the talent was going into other industries. It's really refreshing to hear that no, they're coming to financial services. So we have this great opportunity in front of us to be able to harness that talent and really begin to build. Especially with the quants and the model builders, you're right, many of them live in our institutions inside banking.
Jennifer St. John-Foster (14:55):
DeepSeek, the first open-source reasoning model, was introduced in January of this year. It's no surprise that DeepSeek was actually developed by a hedge fund, High-Flyer in China, who happens to be one of our largest customers. But to emphasize how challenging it is for banks and other enterprises to adopt the technology: DeepSeek was introduced to the world in January. In March, at NVIDIA's GTC conference, Jensen, our CEO, announced Dynamo, the successor to Triton Inference Server, for inferencing reasoning models—just two months later. So the pace of innovation in AI is so hard for even us "Nvidians" to keep up with.
Kristin Streett (15:35):
I think that's also comforting to hear from my perspective, because we're all trying to stay up on everything and it's really, really hard. You just have to have an openness to say, "Okay, look, we're all learning. I have to be open and receptive to the new information that's coming in. Try to learn it, share what you know, help somebody out, and make sure that you're learning together as a team." That helps from the talent perspective. So moving on to the second piece that we noticed Pacesetters are focused on: 62% are focused on their governance progression. In the data here—you can see these little wheels—81% have assessed their internal AI applications and understand the fundamental data requirements that lie underneath, compared to 53% of others.
(16:29):
Knowing where the data lives, how it fits into the process, and how that information is actually being used in the process is pretty critical to setting up an AI process or anything that's going to be handled by an AI agent. So 81% are doing that now and understand it, which is pretty significant. 63% have designated teams focused on drafting their AI policies. They have inventories; they know where AI is being used within their organizations, and they're focused on the processes that AI is touching, so they understand the potential risk implications of using AI in those experiences. And they are also—versus 45% of others—looking at the fairness of those practices using AI: how might it impact a customer-facing experience?
(17:36):
I don't know if that sounds far out or outside of what many of you are working on—I'm interested to hear in the Q&A when we get there—but it's significant to be focusing on governance. A lot of the banks we talk to have internal AI governance boards and organizations. They have leaders and executives that are now being brought in and working through these things. Finally, 73% have formalized data governance and data privacy practices, compared to 41% of others. That's significant—more than a 30-point difference. So we're going to do a little bit of a deep dive here around how banks are navigating the regulatory landscape. Are we seeing a lot of pressure yet from the regulators around AI? And how do you balance your responsibility to the regulators around the soundness of your programs while you're trying to push the boundaries and keep up with technology?
(18:35):
What do we think?
Janna Wagoner (18:36):
Yeah. With any regulatory interaction, consistency is key. Making sure that you're giving a consistent message around embracing AI is the best thing you can do, because then you can innovate underneath that—and it's all AI, right? To the point that it's moving so fast, you just have to be really purposeful about that. The other piece is your first-, second-, and third-line structures: ensuring that you have the knowledge up and down those lines. If you think about who's interacting with your regulators, it's your first line in your exams, but then you have your second line and your third line, so make sure you're all there. It's really important to have a consistent understanding of what AI is so that you're speaking the same language in the rooms with whatever regulatory body is coming to talk to you.
(19:26):
I think that allows you then to have a little bit more freedom underneath the covers to do AI innovation as long as you all know you're talking the same language and that you know what's going on. So I think that's a pretty important part of it.
Jennifer St. John-Foster (19:39):
One of the things we're starting to see developed within banks, as they bring on this enterprise-wide capability of an AI supercomputer, is a "model as a service" platform that they're building in-house, and it's all focused around governance, policy, and security. Where we miss in these conversations with our customers is that a lot of the time we bring them to our corporate headquarters, give them a tour of our supercomputer, and show them how NVIDIA does AI. The banks are like, "No, we can't do that. We can't just let our data scientists go to Hugging Face and download the latest model"—which, as you may know, has a new top-performing model every single week. So banks are developing their own model-as-a-service platforms where they have validated models. Maybe they'll go and validate 20 or 50 models, which is a combination of APIs from OpenAI, frontier models like Gemini or Claude, and open-source models.
(20:30):
And so there is that model validation which is critical.
Kristin Streett (20:34):
Yeah. We announced this at our annual conference this year, but we have observed that the number one hurdle for adopting AI really is understanding and working through security concerns. From our perspective, and from the customers we speak with, that tends to be the very first gate they have to get through. ServiceNow is also looking at ways to help our banks adopt AI more quickly. Is there a way for us to slow the pace at which we release model information, so that internal model risk teams can consume it—evaluate it, assess it, maybe in a sub-prod environment—before it's pushed into production? So we're trying to pause the universe, in a sense, and help our customers consume AI in a way that's acceptable and works within their own organizations.
(21:29):
Awesome. We're just going to keep things going here. So this is kind of the fun part: embracing the power of agentic AI. What are the Pacesetters doing? 51% are very familiar with agentic AI, versus 26% of the cohort. And the percentage currently using it is 41% versus 18%.
(21:56):
Pacesetters are developing and using agentic AI. I have spent time with some of the Australian banks, which are extremely innovative and looking at AI across multiple business use cases, and their questions of our team and their demands of us have been really impressive in terms of how they're embracing agentic AI. They're monitoring and interacting with internal systems at 51% for Pacesetters, and monitoring and addressing cybersecurity alerts—again, taking that security issue head-on. They're building bespoke products and services with agentic AI, and they're acting on customer inquiries at 51%. That's huge. I think a lot of our banks here in the US are a little bit slower on customer-facing agentic use cases, mostly for security and privacy concerns, which are extremely valid. So let's move on to the questions here. Knowing that agentic AI is the new frontier and many of us are just working on AI or even generative AI, how do you get started?
(23:08):
Where do you start? The Pacesetters are focusing on talent, on governance, and on their openness to agentic AI, but Jennifer, what would you advise to those in the room who are trying to accelerate their programs?
Jennifer St. John-Foster (23:21):
We at NVIDIA love having the agentic AI conversation. Over the past two years, everything's been focused on GenAI, which is great, but the benchmarks that are starting to come out within the banks are mostly productivity gains—back-office ChatGPT functionality—which is great for a bank. Agentic AI is a whole new opportunity around front-office, potentially revenue-generating banking agents. And that's very exciting. It's exciting because it actually allows NVIDIA to highlight our technique. You can get away with building your own ChatGPT functionality for a bank by using OpenAI's API and putting a RAG in front of it, but there's a Microsoft white paper that says RAG gives you anywhere from 5 to 10% additional accuracy on your model. If you take an OpenAI API plus a RAG, you're only adding 5 to 7% additional accuracy.
(24:19):
NVIDIA's technique, our philosophy: you have to do the hard work. There's no easy button. And so it's the whole data flywheel that we've been preaching and talking about that I think a lot of our customers have been avoiding because it is the hard work. But now if you're talking about an autonomous banking agent that can reason, think, act independently, and potentially make decisions that will be revenue generating, there's no easy button. And so this data flywheel is taking the latest open source model—going to Hugging Face and downloading the latest Llama model—and doing domain adaptive pre-training. You're going to train it on the bank's intellectual property, which is your proprietary data. You're going to make the large language model one of your bankers. It's going to understand your products and your culture. It's going to understand that "options" within banking is different than "options" within a classroom.
(25:16):
And then the next step within that data flywheel is going to be fine-tuning. Now you're going to teach your large language model a skill. You're going to show it examples of how to be a research agent, how to do question and answering, how to do text summarization. And then the last step within the data flywheel is a RAG. And so that's really our technique: the data flywheel. It's doing the hard work.
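To make that flywheel concrete, here is a minimal sketch of the three stages using the open-source Hugging Face transformers and datasets libraries. The base model name and the two data files are placeholder assumptions, not anything the panel prescribed, and a real bank pipeline would insert evaluation, guardrails, and model-risk review between every stage.

```python
# Sketch of the data-flywheel stages described above. Placeholder files:
# bank_corpus.txt (raw proprietary text) and task_examples.jsonl (one
# formatted "text" field per task demonstration).
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "meta-llama/Llama-3.1-8B"            # any validated open-source base model
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.pad_token or tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base)
collator = DataCollatorForLanguageModeling(tok, mlm=False)  # next-token labels

def tokenize(batch):
    return tok(batch["text"], truncation=True, max_length=2048)

# Stage 1: domain-adaptive pre-training on the bank's proprietary corpus, so
# the model learns that "options" in banking means derivatives, not choices.
corpus = load_dataset("text", data_files="bank_corpus.txt")["train"]
corpus = corpus.map(tokenize, batched=True, remove_columns=["text"])
Trainer(model=model, data_collator=collator, train_dataset=corpus,
        args=TrainingArguments(output_dir="dapt", num_train_epochs=1)).train()

# Stage 2: fine-tuning on task demonstrations (research, Q&A, summarization).
tasks = load_dataset("json", data_files="task_examples.jsonl")["train"]
tasks = tasks.map(tokenize, batched=True, remove_columns=["text"])
Trainer(model=model, data_collator=collator, train_dataset=tasks,
        args=TrainingArguments(output_dir="sft", num_train_epochs=3)).train()

# Stage 3: RAG is layered on at inference time -- retrieval over bank
# documents feeds context to the now domain-adapted, task-tuned model.
```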
Kristin Streett (25:39):
That's incredible. I love just hearing the practice and the understanding, so thank you for sharing that model. Janna, when you're listening to that—obviously it's impressive to have that sort of programmatic approach—but even just the practicality of use cases or POCs: a lot of folks are in POCs, trying to elevate beyond quick tactical use cases toward something that functions at the enterprise level and can get funding internally. That's what I'm hearing. Where's the best place to start, from your perspective? You can take it from a use case perspective or anything we've shared so far.
Janna Wagoner (26:19):
We started with GenAI as it relates to customer inquiry. I think that's naturally the next step to get to agentic, mainly because you've already had to build the foundational data that GenAI looks at, and you then start to train on that exact thing. The other piece that we're looking at is just servicing in general. We pride ourselves on being a relationship bank, so personally, I think we're probably a little bit further away from the sales and revenue-generating side, but once a customer is in, we're always looking for ease of service. What's the next question you're going to have? If we can train agents to be able to answer those questions... The other thing we have internally is a place where our bankers can actually type in questions and ask our servicing layer, "Hey, what do you do in this situation?" and that helps us from a talent perspective.
(27:19):
I think you can probably squeeze some expense out of that by turning it more agentic versus having a human behind it. But again, instead of making that scary, how do you tell the folks who are answering on the back end: create the content you have—however you learned is how you have to train your model. We just have to get back to basics. We say all these big fancy words and it almost feels like a technology thing. A lot of people say, "Janna, I'm not technology." But you are, because if you came in and you trained an agent to do something, all we're talking about is training a model to do that exact same thing. You still need oversight of that.
(28:00):
Servicing is just a good place to start because we're great at training people all the time, right? Turnover's there. I think we're a little bit further away from direct decision making, especially around credit decisioning and other things along those lines, just because of the regulatory nature of it—you just never want unfairness to be there. But customer questioning and servicing is where I would start, because that's where we started with GenAI.
Kristin Streett (28:24):
I think what I've heard echoed in our customer advisory board, which is comprised of banking executives who advise us on our strategy, has been contact center, because there's containment and the data doesn't have to be perfect. So a suggestion was that's a potential area to look at in terms of stepping forward. But I think you said something really important: generative AI is a huge step in the right direction. Just summarizing provides so much value and gets the employee—regardless of line of business or use case—so much more information at their fingertips to function better. Just not having to go to five different systems is a huge step. We're trying to deploy these agentic use cases, and they sound amazing and cool, but generative AI is solving a significant number of challenges internally, from my perspective.
(29:19):
So yeah, thank you for sharing those use cases. It's hard to know where to start. And when we get to the Q&A, I'd love to hear if any of you have really interesting use cases you might be able to share with this audience—it'd be awesome to hear from you. So I'm going to bring up on the page here—you've also got it on your chair—an ungated download of the AI Maturity Index, and you're welcome to take it with you and share it with your teams internally. Additionally, there is an assessment: you can go in and actually assess your own organization against this data and see where you fall, which I think is really helpful from a prescriptive perspective—to gauge your baseline and then know where you might be relative to some of the different Pacesetters.
(30:13):
So I would like to thank you both for just your awesome thought leadership. Janna, thank you for the practicality and the realism of being inside of a bank. And Jennifer, absolutely amazing thought leadership from you. You're brilliant. I want to follow you around in your meetings. I'm learning from you as you talk, so thank you for joining us. But I want to open up, if we have some time before your next session, for any Q&A. We've got a microphone here and I'd be happy to share mine. Anybody want to ask a question or share an insight? Yes, right here. That's awesome.
Audience Member 1 (30:57):
For mortgage?
Kristin Streett (30:59):
The question is, do we think we'll have a situation where for mortgage, agents would be licensed and therefore able to run particular processes or function in client-facing capacities?
Jennifer St. John-Foster (31:15):
That's a really good question. I've actually never heard anyone take it to that level of being licensed. I would say where we are right now is that observability is key. You're starting to see a lot of these observability tools—from vendors in the ecosystem or something homegrown—so that if somebody actually gets declined for a mortgage, you can explain why the model declined the mortgage. I think if you talk about front-office revenue-generating agents, there's always going to be a human in the loop, at least for the foreseeable future.
Janna Wagoner (31:47):
I agree, but I don't want it to be out of the realm. I think it goes back to what I was trying to hit on earlier: the way that you're certified today, maybe an agent would be, but the human has to be certified in something else. So now instead of the human being certified in mortgage, they're certified in the validation part. I think your certs and what a cert means is going to be on a spectrum. As long as you're aware of where the human has to stay in the loop, you'll naturally have that as the cert, and the cert today might be at an agent level.
Kristin Streett (32:23):
I think one of the things that's interesting in that question is how agents are built. What I'm observing is that it isn't one agent doing 15 things. It's an orchestrator for a process—a master agent that sets things up—and multiple agents underneath it running tasks within the confines and the policies of your organization, which say, "This agent has the ability to do this task: to take this first step in the customer onboarding process or the customer inquiry process." That's it. They work within these rules; they can only access this data and surface these specific functions. Then they pass to the next agent, and that agent takes its task. And the more simplified the task is, the faster the orchestration can be, because it isn't trying to run multiple things at the same time against the data or the LLM.
(33:24):
So I think when you look at it from that perspective, there is an opportunity to bring a human in the loop, because there are so many steps in one of these processes. But I love the creativity, and that is exactly what we need in any organization: stretching the limits, looking for ideas. Where would this be helpful? Could we do it? ServiceNow is forward-deploying engineering resources to help our customers who have these brilliant ideas and want to get started but need assistance. And the more that we can help our banking customers—really, customers in any industry—go live with agentic use cases, the more it creates opportunities for someone here in the room to do something similar.
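As a rough illustration of the orchestration pattern Kristin describes—a master orchestrator dispatching narrowly scoped task agents, each restricted by policy to a single task and a single slice of data—here is a hypothetical Python sketch. Every name in it is illustrative, not any vendor's actual API.

```python
# Hypothetical sketch: scoped task agents under a simple orchestrator.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class TaskAgent:
    name: str
    allowed_fields: set[str]          # the only data this agent may read
    run: Callable[[dict], dict]       # the single task it performs

    def execute(self, record: dict) -> dict:
        # Policy enforcement: expose only the fields this agent may see.
        scoped = {k: v for k, v in record.items() if k in self.allowed_fields}
        return self.run(scoped)

@dataclass
class Orchestrator:
    agents: list[TaskAgent] = field(default_factory=list)

    def handle(self, record: dict) -> dict:
        results = {}
        for agent in self.agents:     # each agent does one step, then hands off
            results[agent.name] = agent.execute(record)
        return results

# Two steps of a customer-inquiry process, each with its own narrow scope.
verify = TaskAgent("verify", {"customer_id"},
                   lambda d: {"verified": d["customer_id"] is not None})
classify = TaskAgent("classify", {"inquiry_text"},
                     lambda d: {"topic": "mortgage" if "mortgage" in d["inquiry_text"]
                                else "general"})

flow = Orchestrator([verify, classify])
print(flow.handle({"customer_id": "C-123",
                   "inquiry_text": "Question about my mortgage rate lock",
                   "ssn": "never exposed to either agent"}))
```

Note that the sensitive field never reaches either agent: the scoping rule, not the agent itself, enforces the policy.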
Janna Wagoner (34:13):
I'm going to go on the record and maybe in a couple years we'll see this. It's either going to be in schools or I'm waiting for a job board to say there will be a role called Agentic Orchestrator. You will be hired in to be an Agentic Orchestrator. And so it's one of those things where as you see where this goes, the jobs will change and the ways that we do things will change. But if you think about exactly what you're saying, it's all pieces of a process. As long as you stay process-oriented and think about the best way to do that, you will have roles unlocked like that. Maybe your cert's at that level, right? "I'm a mortgage agentic orchestrator," and you start to see how that world could really unlock. Love it.
Kristin Streett (34:56):
Oh yes, over here. It's right here and then behind. Yeah.
Audience (Annette Stewart) (35:01):
Hi everyone. My name's Annette Stewart, with Samsung. We had an event on this and I want your thoughts, because we teamed up with Google and talked about AI, and I want to throw this out as a use case when we talk to clients. The big issue was that there were no rules of the road for how your firm does this, how your bank does this. And so I think the biggest issue for us is that if we don't set rules, then someone's going off a cliff here and someone else isn't even moving forward. I'd love your thoughts on that, because I saw it with some major banks last week—some people use ChatGPT even for notes, and some people aren't allowed. Some people do this, some people do that. So what are your thoughts on the rules as you think about AI? Especially with this audience here: how do they go back, and what are the rules? Maybe she's doing this, I'm doing this, etc.
Janna Wagoner (35:55):
I think you're hitting on exactly the intention of the governance question, but each bank has to take its own risk posture on that. I actually think that's a little bit of a competitive advantage. And hopefully what you take from this today is: how do I take this information and go influence what the rules of the road are at my bank? The ones that are going to take a little bit more risk are likely going to see some progress, but you're also going to have people who sit back. It goes back to just getting all the people in the room. I go into our first, second, and third lines of defense and just get them to understand: what are we willing to take on? And then that's your guidepost.
(36:39):
Once you start to see success in maybe a small guidepost, I think then you can start to broaden it, but you have to start there. Or you're going to have naysayers, you're going to have a model go off the rails, or you're going to make a decision and that's just a foundational thing you have to talk about.
Jennifer St. John-Foster (36:56):
Yeah. It's public knowledge that a lot of the top global banks were fined billions of dollars for using ChatGPT when it first came out. And so they built their own—JPMorgan Chase's LLM Suite is essentially their own ChatGPT functionality, built to overcome that. I would say one of the interesting things at the forefront of rules is commercial licenses for using open-source models. We use open-source models; that's how we embrace LLMs within NVIDIA. But a lot of our banks are coming to us saying Meta won't help them in terms of the commercial license for Llama: if we build an AI that is revenue-generating, or an agent for the bank, could we get sued somewhere down the road? So that's a number one, top-of-mind topic as it relates to rules and how we build AI.
Kristin Streett (37:49):
Awesome. Did that help? Awesome. Yeah. And we had a question back here.
Audience Member 2 (37:56):
Good morning. You keep mentioning Pacesetters, right? And just looking at the profile of what the ideal Pacesetters are doing from talent to governance and what they're doing as far as their data is concerned to adopt agentic AI—who are these Pacesetters?
Kristin Streett (38:14):
Yeah, I think everyone always asks who the actual banks are. We don't disclose which exact banks participated, but it is a global survey, and those surveyed are director- and SVP-and-above-level individuals. I can say that some of the banks we work with where there's a little bit more innovation globally, as I mentioned, are in the Asia-Pacific area. In Australia and New Zealand, those banks are leading, in my opinion, based on my exposure. They're just incredible in their adoption of agentic AI and the ways they're moving very quickly. I want to be able to just open the layer for you and say who they are, but I would say they're probably tier two, tier one, and some of our global banks—probably a little more in the range of $165 billion-ish—but I can't name the exact names in the survey.
Jennifer St. John-Foster (39:21):
I'll just say this generically, from my team covering and working with the top banks: if we look at where a lot of the talent is going—the talent outside of big tech that's going into banking—and where we see peers at the same capability in terms of data science talent, we see a lot of them at Goldman Sachs and a lot of them at JPMorgan Chase.
(40:10):
Another thing that's starting to present itself—we try and drag this out of our customers when we meet with them—is that a lot of these Pacesetters within banking are starting to train their own foundational models, which is critical. We've been saying this for years, and they kept saying, "We'll never build our own foundational models." Now we're starting to hear that a lot of these banks are building their own payments foundational models.
Kristin Streett (40:10):
I love that. And I think the flexibility of being able to access external LLMs within the framework of what's acceptable in your organization—the ability to tap into particular models deemed to meet your criteria or a certain risk profile—is important. I love that a lot of banks are really trying to experiment with this. They're doing it in sub-prod environments and looking at use cases, like I said, in the contact center. Anything else? Other questions?
Janna Wagoner (40:52):
Kristin, I want to add on to something you just said that I think is important, depending on what role you play. Think about your procurement departments, or if you are actually purchasing or buying these things: the types of questions you have to ask are exactly that. What is your model? How did you train it? What does it look like? So just another tip: if you're going into this, your questioning of vendors has to change. And then it also goes into the licensing structure. Not everybody who's doing modeling on the ChatGPTs or the Copilots or whatever... Microsoft definitely broke ground there, so you can have your own license set apart, but you almost have to ask, "Can you disconnect this from the internet? What did it learn on? How often am I getting model updates?" Just another thing to keep in mind: your questions for bringing this in should be part of your governance model and then pervasive throughout your normal onboarding processes.
Kristin Streett (41:50):
I think an emerging trend we're observing from our customers is the challenge of building an inventory. Where does AI live in my organization? Who's using it? Which vendors are providing it? How is it operating? Where is that being stored? Where are you looking at that? How are you observing it, and how are you reporting against it? ServiceNow has been working with our teams to help our banks put governance around the universe of use cases in the organization. We call it AI Control Tower, but it provides a lot of oversight and the ability to track: which LLM, which agent, and what is it doing? Are we using Microsoft Copilot? Where? For what use cases? All of those use cases and scenarios are really, really difficult to manage in extremely large organizations.
(42:47):
So getting your arms around the inventory as well. Do you see anything with the larger banks on how they're keeping their governance models around all the various use cases and things that they're doing?
Jennifer St. John-Foster (42:59):
I would say it's still a lot of red tape. We bring these capabilities, an enterprise-wide AI factory capability, to a large bank with AI tools and libraries, but then the bank tells us it still takes 11 months to get a use case through the GenAI governance process. It's still a big problem.
Kristin Streett (43:22):
That's what I thought. Any other questions? It's so great to see everybody in this room and just have this discussion with you. Hopefully you've found this insightful. Yes, right here.
Audience Member 4 (43:43):
As an individual who's looking to just make AI more habitual—the ChatGPTs, the Copilots of the world—what do you guys recommend to just really embed that more in my everyday practices?
Kristin Streett (43:56):
I will go first. I'm an experimenter. I will go into a system and just start pushing buttons and see what happens. I love that question. I'm also somebody who greatly prides myself on being able to read things and synthesize my own opinion, so it's very hard for me to just go, "I'm just going to let ChatGPT tell me what to do." I have to really let go of my own sense of needing control to just embrace it. We have the opportunity—and I'm grateful for it—to use Microsoft Copilot internally, and we also have our own internal AI tools, but I've been going into Microsoft Copilot and just asking it to take something I've written and redo it. What I have found difficult—if you're somebody who likes to write, I'm just speaking from my personal experience here—is that a whole part of the writing process is understanding exactly what you want to say, how you want it said, and the way it's going to land with that audience. And that is prompting.
(44:51):
If it's difficult for you to think through that process, you can just say, "ChatGPT, just figure this out for me." But invariably, what underlies that is a very direct, prescriptive set of tasks that you're asking ChatGPT to do for you. So it's really a balance. Sometimes I go in and I'm frustrated with it, and I get kind of annoyed because I feel like I can just do it faster myself, but more and more, if there's something I need to summarize really quickly, I've been doing that. I've just been letting go. I've been embracing virtual assistants more and trying to just go through anyone's.
(45:45):
If I'm calling into my bank, or if I'm making a call to pay my medical bills, I force myself to use those tools and just make them a part of everyday life. I will say that the internal teams at ServiceNow are focused on prompting—like, how do you write a good prompt?—and so I'm trying to focus there personally. What do you guys do?
Jennifer St. John-Foster (46:06):
You should use it every single day. We use ChatGPT. We also have access to Perplexity Pro within NVIDIA. Anything from writing a job description to—if I've got somebody new on my team who's meeting with a new customer—I'll ask Perplexity like, "Give us the top use cases for asset management." We use it every day. A lot of our data scientists and solution architects use it for coding and code generation. It's limitless. I would absolutely encourage you to unlock the power of using AI.
Janna Wagoner (46:38):
I have personal stuff and work stuff. Personal: I love DIY. I'm a big Pinterest girl. I have all these things pinned and then I'm like, "How do I do that?" I've used ChatGPT: "How do I do a board and batten wall?" and it gives me all my stuff and then asks, "Would you like me to create a list of what to get at Home Depot?" Instead of doom scrolling, you go down this rabbit hole, and next thing you know, I'm on my way to Home Depot getting all my stuff. That was one. The other thing on ChatGPT: if you have it for personal use, I would go ahead and invest in the $20-a-month plan—just do it. I do that as well. If you're doing Netflix and everything else, just add it to the subscription list.
(47:24):
But as I'm driving, I have my best ideas, right? I'm talking out loud to myself—so record yourself. I've recorded myself and it's this giant thing, and then I put it in ChatGPT: "What was I trying to get at?" And then you prompt your stuff: "How would you say that to your boss and not offend them?" or "How would you say that to your husband and not offend him?" You can use it in so many ways. If you realize, "I'm driving and I'm saying all this stuff in my head," it's actually a nice therapist—get it out. You kind of learn that, and then it's two birds, one stone. At work, same thing.
Kristin Streett (48:05):
We use it a ton in our meetings. Six people in a meeting have six different sets of notes—just throw them all together and have it organized and summarized into action items. We're using AI Companion in our sessions. For me, it's just letting go and letting the technology do some of the work for you. My personal opinion is that sometimes it's not detailed enough. I'm always like, "This is very high level," and there's nuance in my perspective on some of the things happening. And that's where the human in the loop comes in. I just use it as a "get me started," and then I can go in and layer in the very fine details that are important for me to express in some of those meeting summarizations.
Janna Wagoner (48:51):
It's funny. So how do you prep for a board session when we're all in different states? We looked at this, and we all started talking a couple of weeks ago, and it just recorded everything we said, and we were like, "That was way too long-winded. We were going to bore them." So then we took it all, put it in, and asked, "What were the three things I was trying to say?" So even prepping for this, we just made sure, with a little prompt, that we had the three things we wanted to say, and then everything else is a little more conversational. We used it for this.
Jennifer St. John-Foster (49:23):
I think the coolest example of prompting recently—a couple of months ago, did anyone else prompt ChatGPT to make you a character in a toy box, with accessories like golf clubs, a tennis racket, and a bottle of wine? No, but I wish I did. That was so cool. It made you a little toy-box character: you loaded your picture and it looked like you.
Kristin Streett (49:45):
How about Mahjong? Was there a Mahjong option? That's me.
Jennifer St. John-Foster (49:48):
That would have been an accessory you would have prompted to put in your toy box.
Kristin Streett (49:52):
I love it. Anything else? Any ideas or use cases? Yes. Right behind you. I thought you had a mic. Sorry.
Audience Member 3 (50:05):
How do you balance privacy?
Kristin Streett (50:07):
How do you balance privacy?
Audience Member 3 (50:10):
Yeah. As you just mentioned, you upload your photo and you've got your little toy box and all of that. How do you balance privacy with all of these ChatGPT tools? The stuff you're putting into ChatGPT—are we confident that it's not being shared anywhere and that the data is secure?
Jennifer St. John-Foster (50:29):
I think that's why a lot of the banks got fined and are building their own capability, because it's absolutely correct: if it goes into ChatGPT, OpenAI is going to use that data to train their models. And that's why a lot of the banks have currently decided to use ChatGPT—let's say a GPT-5 OpenAI model—by engaging with the API and putting a RAG in Azure, because the alternative is that they would have to give OpenAI their data, and it says right in the terms that they will actually use that data to train the model.
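For context on why that pattern protects the data, here is a minimal sketch of API-plus-RAG: the bank's documents are embedded and searched locally, and only the few retrieved passages are sent to the hosted model as inference-time context, never as training data. The embed() and call_llm() functions are stand-ins for whatever embedding model and approved API a bank has validated, not real library calls.

```python
# Minimal RAG sketch: documents stay local; only retrieved context is sent.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: deterministic random unit vectors. A real
    # system would use a validated embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

def call_llm(prompt: str) -> str:
    # Placeholder for the hosted model API; the prompt is the only data sent.
    return f"[model response to: {prompt[:60]}...]"

documents = [
    "Wire transfers over $10,000 require a second approver.",
    "Mortgage rate locks are valid for 60 days from application.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(question: str, k: int = 1) -> list[str]:
    q = embed(question)
    scores = doc_vectors @ q                   # cosine similarity (unit vectors)
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)

print(answer("How long is a rate lock good for?"))
```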
Janna Wagoner (51:01):
On the business side, I think that's where the governance comes in. On the personal side, I look at it this way: if I would upload a photo to Facebook, LinkedIn, if I would share an article, if I would give my opinion (which we all do anyway), ChatGPT already has it. That's how it came here. So it's already crawled those things. So if you're just giving it information to give you points back, your privacy is already out there.
Kristin Streett (51:24):
I think it's an important question. I really appreciate you asking it. My daughter uses ChatGPT in school for specific functions; it's part of their curriculum. I had to just pause for a second and I said, "Absolutely zero personal information goes in that." And she's like, "Mom, it's just school." I was like, "I'm just telling you, because they get creative and think it's fun." And I just said, "Just for me, just please don't." I think that it's an important question to make sure you understand how the tools are being used. And I think that's part of what's so beneficial for the bank's overall governance strategies and the thinking going around how it's being used and trying to be very protective of customers' privacy. Are we good? Yeah, we'll take your question and then I think we'll wrap with you.
Audience Member 5 (52:23):
From a governance perspective, how do you address hallucinations?
Janna Wagoner (52:33):
A couple of things. One, you need to know that hallucinations are absolutely real, and decide the risk posture you're going to take on them. That's where your use cases come in—you have to say, "Where am I willing to take on that risk?" At the same time, once you start to learn about models, you can tune them so they hallucinate a little bit less. You almost have to say, "If I'm using this LLM, how do I tune it?" One of the responsible AI (RAI) guys is amazing; he uses "the sky is blue." That means you have yours tuned all the way up, so that 99% of the time, what's said next is exactly that. If you tune it to allow a little more risk, you might get "the sky is high" or "the sky is cloudy." You have to understand what the models are doing and then adapt your risk perspective to where it is. Look, you're always going to have hallucinations. If you know every other bank or company is taking that risk and you have the ability to defend it, that's where your governance structure comes in.
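A toy illustration of the tuning Janna is describing, with invented numbers: sampling temperature reshapes the model's next-word distribution, so a low temperature all but guarantees "the sky is blue," while a higher one spreads probability onto "high" and "cloudy."

```python
# Toy numbers for "the sky is ___": temperature rescales the model's scores
# before the softmax, shifting between near-deterministic and exploratory.
import numpy as np

words = ["blue", "cloudy", "high"]
logits = np.array([4.0, 2.0, 1.0])       # made-up model scores

def next_word_probs(temperature: float) -> dict:
    scaled = logits / temperature         # temperature-scaled softmax
    p = np.exp(scaled - scaled.max())
    p /= p.sum()
    return dict(zip(words, p.round(3)))

print(next_word_probs(0.3))   # ~{'blue': 0.999, ...} -- near-deterministic
print(next_word_probs(2.0))   # probability spreads across all three options
```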
Jennifer St. John-Foster (53:45):
NVIDIA has tools like guardrails—we've developed guardrails to help with hallucinations. We actually train our own foundational models from scratch, not because we're trying to compete with our largest customers like an OpenAI or a Google Gemini, but because we need to know how to train large language models from scratch so that we can build tools for our customers, like guardrails, that help prevent things like hallucinations.
Kristin Streett (54:10):
Awesome. Thank you so much for your time. It's so nice to see a full room and we'll stick around if any of you have questions. Thank you so much and wish you the best conference. This is such an amazing event and I hope you all get everything you need out of it and the thought leadership is able to be shared back to your organization. Thank you so much and have a great day.
