Podcast

What a 'moral architecture' for AI in banking would look like

Surjit Chana holding a microphone
Surjit Chana, director at Beneficial State Bank, a Harvard Fellow and a tech committee member of the Global Alliance for Banking on Values

Transcription:

Transcripts are generated using a combination of speech recognition software and human transcribers, and may contain errors. Please check the corresponding audio for the authoritative record.

Penny Crosman (00:03):

Welcome to the American Banker Podcast. I'm Penny Crosman. In a recent opinion column, Surjit Chana, a board member of Beneficial State Bank, a Harvard Fellow and a tech committee member of the Global Alliance for Banking on Values, explained why there's a need for what he calls a moral architecture for AI. "AI is already learning finance, absorbing decades of data shaped by traditional market priorities, and increasingly teaching finance through automated decisions, chatbots, and risk models that reinforce those priorities," he wrote. "But it's replicating outdated modes of banking. Many models are trained to optimize risk-adjusted return using historical data that embeds decades of structural bias. We shouldn't be surprised when the output mirrors the past. What we need instead is a moral architecture for financial AI, a framework that governs how algorithms are designed, trained, and deployed, grounded in the idea that finance is a social contract as well as a business."

(01:08):

Surjit, thanks for coming.

Surjit Chana (01:11):

It's great to be here, Penny. Thank you for having me.

Penny Crosman (01:13):

Sure. So what brought you to this point? Why do you care about creating a moral architecture for AI?

Surjit Chana (01:21):

First of all, let me start with what brought me here, Penny. Having come from very humble beginnings, I had the privilege of an amazing 30-year career in technology, including 10 years as a member of IBM's senior leadership team. Over that long career, I had the opportunity to run IBM's supercomputing business, which was about a billion dollars; IBM's mid-market business, about $5 billion; and I was IBM's chief marketing officer in Europe for the $18 billion business that IBM has there. One fun fact I would mention from that career that is relevant to this conversation is that my first patent, two years after graduating in physics, was an early version of AI called expert systems. I developed a solution on a mainframe computer in mainframe assembler code that essentially captured human intelligence from IBM support teams to automate problem solving in an expert system. IBM, I think, released a product in 1992, if I recall correctly.

(02:24):

I do joke with my three adult kids that much of the thinking on AI has been around for decades. And we have had many of these ideas for a while. But as that young 25-year-old, when I got my first patent, the only things I was lacking were the internet, fast storage, cloud, and GPUs. Other than that, we were in good shape in 1990 to progress our AI thinking. It is beautiful to see how far the technology's come now that we have the internet, fast storage, GPUs, cloud, and of course, large language models. And to answer the second part of your question on why I personally care about creating a moral architecture for AI: I think this also goes back to my humble beginnings, my deep technology career, and my realization of how powerful AI will become and the profound impact it will have on society. I retired early and wanted to pivot to how I can help address societal issues.

(03:26):

I came across the Harvard Advanced Leadership Initiative Fellowship Program that helps senior leaders to do exactly that, apply their skills to tackle societal issues. I decided to focus my fellowship project on how technology in general and AI in particular can be used for social good. I realized very quickly the dangers of the financial services sector adopting AI without deep consideration of how to do that in a moral, ethical and responsible way. I decided I need to leverage my deep technology skills and understanding of financial services to provide some thought leadership on how to deploy AI in a moral and responsible way to help make sure this does not add to the societal issues we already have. So that's my journey and why I've been working on this moral architecture for AI and finance. I feel like I know too much about AI and its downsides not to provide this thought leadership.

Penny Crosman (04:27):

What kinds of potential harms of AI are you most concerned about?

Surjit Chana (04:31):

Four harms come to my mind. The harm I'm most concerned about is the automation of historical inequality at unprecedented speed and scale. This is not speculative future risk. It's happening now. When AI models are trained on decades of financial data shaped by redlining, discriminatory underwriting and structural exclusion, and then deployed to make millions of credit decisions per day, they do not merely replicate the inequalities of the past. They institutionalize them at a speed and scale no human loan officer ever could, and they do so with a veneer of objectivity that makes the discrimination harder to see, challenge or reverse. The 2025 Government Accountability Office Report confirmed this is not theoretical. The harm is real, it is present tense, and it's compounding. What makes this particularly concerning is the opaqueness problem. When a human loan officer makes a biased decision, there is at least a chain of accountability and a person who can be questioned.

(05:43):

But when an algorithm decides or denies credit based on proxy variables, zip code, transaction timing, spending patterns, the affected borrower often has no meaningful recourse. The institution may not even understand why the decision was made. The second harm that concerns me deeply is financial exclusion: the 1.3 billion unbanked adults globally and the millions of credit-invisible Americans. AI has the potential to either dramatically extend financial services to those populations or efficiently and permanently lock them out by optimizing credit decisions for populations that already have thick data files. The institutions that move first and move well on inclusive AI will capture enormous untapped markets. The institutions that don't will, through inaction, ratify a two-tier financial system, one with rich data and abundant access and one without. Third, I'm generally concerned about the workforce transition, but not in the way it's usually framed. The conversations tend to focus on the total number of jobs displaced versus created, and projections like the World Economic Forum's data point that 92 million jobs will be displaced and 170 million jobs will be created are often cited reassuringly.

(07:15):

But those aggregate numbers mask a distribution problem that the paper does flag. The workers most exposed to displacement are those in routine administrative and clerical roles who often lack the financial buffers, professional networks, and transferable skills to navigate the transition. The workers least exposed tend to be the ones who least need the protection. If the transition isn't actively managed with genuine investment in re-skilling, honest communication and workforce impact assessments before deployment, the harm will be concentrated precisely where institutions and society can least absorb it. Fourth, and this is the one I think is most underappreciated in the banking sector specifically, the environmental cost. Data centers now consume roughly 4.4% of all U.S. electricity, and AI workloads are accelerating that figure. For financial institutions, deploying energy-intensive AI without accounting for this footprint is a major concern. Finally, the meta harm I think about is the closing of the window.

(08:32):

I think we have a brief moment where the foundational architecture, both technical and moral, is still being written. I think the window is shorter than most leaders appreciate. The harm of inaction is not merely reputational or regulatory. It is a foreclosure of a genuine opportunity to use the most powerful technology in a generation to expand financial access, reduce discrimination, and build a more inclusive financial system. That opportunity once lost is very hard to recover. Look, none of these harms are inevitable. They are the result of choices that are made right now, largely by default. The moral architecture exists precisely to make those choices deliberate rather than accidental. So those are the four harms I'm most concerned about. Automation of historical inequality, financial exclusion, workforce transition and environmental cost.

Penny Crosman (09:32):

So I think your first point had to do with AI models starting to redline by learning that biased activity from past loan decisions. Have you personally seen an AI model learn how to redline from human behavior?

Surjit Chana (09:51):

Yeah. I think the positive way to look at this is the work that Zest AI has done. They're a technology company founded in Burbank, California, whose mission was to actually broaden access to responsible lending. They've been working with institutions, actually about 180 banks and credit unions, comparing what results look like before you pay attention to the redlining and other data issues versus after you pay attention to them. They saw an increase of about 25% in approval rates while holding risk constant. So once you factor in the redlining and the data that's causing those issues, you'll see a significant improvement. In fact, they worked with an entity called Verity Credit Union in Washington State. When they ran the model with AI underwriting using 300 variables versus the traditional 15 that counters or allows for the discrimination in data, the results by demographic group were pretty incredible.

(11:00):

African-American approvals increased 177%. Approvals for individuals over 62 increased 271%. Approvals for women increased 194%. So this example clearly shows that when you factor in those sort of considerations, you're going to see a significant impact.

Penny Crosman (11:20):

Does that not show AI models being less inclined to redline?

Surjit Chana (11:26):

That shows that when you apply the right type of AI with the right moral considerations, which means understanding that your incoming data could be contaminated or have those discriminatory practices built in, you can counter that by focusing the AI execution correctly. Now, when AI is applied without those moral considerations, that's where the danger is. When companies like Zest AI factor in those moral considerations, then you see the advantages of AI in that environment.

Penny Crosman (12:01):

And I also thought you made an interesting point about workforce displacement, that the people most likely to be affected are the people who have the hardest time recovering or have the least financial stability. What are some of those jobs you're thinking of when you think of within banks? And do you see certain types of jobs just being totally eliminated by AI?

Surjit Chana (12:33):

Yeah, it's a really good question. My assessment is that about a third of what happens in a bank can be fully automated with AI, a third can be augmented with AI, and a third will remain human. So at the macro level, it's about a third, a third, a third, Penny. As I look at some of the tasks and roles within a bank, the administrative and clerical roles are all exposed. And my view is that if you systematically look at what each individual does in a bank, a lot of that can be automated with AI. Again, my best assessment is about a third fully automated, a third augmented, and a third remaining human.

Penny Crosman (13:26):

So things like back office work, a lot of cubicle-type work that involves data entry and filling out forms, that sort of thing, I guess, would be-

Surjit Chana (13:45):

Absolutely. Absolutely. I think what AI does is go up the cognitive ladder here. So the initial focus will be many of those roles exactly, Penny, but over time AI is going to climb that cognitive ladder and impact more and more roles.

Penny Crosman (14:05):

And when you say a third, what timeframe are you thinking?

Surjit Chana (14:09):

That's a really good question. I am thinking that within three years we're going to start to see those sorts of numbers.

Penny Crosman (14:17):

Wow. And you think a lot about morals and ethics. Do you think that banks, which have always been a pretty good source of entry level jobs or a way for people to get pretty good jobs right out of college, do you think they have a responsibility to try to not replace those kinds of jobs with AI?

Surjit Chana (14:42):

Yeah, I think the nature of the issue here is that there will be displacement. What I'm asking for in the moral architecture is for banks to be thoughtful about that: provide the retraining, provide the re-skilling, understand what those implications are. And having been aware of and part of multiple technology transitions and transformations, I know new opportunities are being created and will be created. So I think it's the bank's responsibility to, first of all, deeply understand what the likely implications are and what roles will be impacted, then provide the training and a career path necessary for people to progress. The moral framework is really calling for banks to be very thoughtful in how they deploy AI, factoring in the considerations on the workforce.

Penny Crosman (15:44):

So we started out talking about this moral architecture idea that you have. What would a moral architecture for AI in finance look like?

Surjit Chana (15:56):

So in my mind, there are six pillars to this moral architecture, Penny, each with practical mechanisms for implementation by banks. The first pillar is fairness and bias, which we touched upon. In practical terms, this requires rigorous bias testing before and after deployment, searching for less discriminatory alternative models, enriching training data through partnerships with CDFIs whose portfolios already reflect underserved populations, and tracking disaggregated data on approval rates and pricing. The second pillar I would mention is transparency and explainability. Look, if you can't explain why your AI made a decision, that system isn't ready for high-stakes deployment. Transparency serves three distinct audiences: customers, who need actionable information to improve their financial standing and challenge decisions as needed; regulators and auditors, who require documentation for compliance and oversight; and finally bank employees, who must be empowered to challenge and override algorithmic decisions when necessary.

(17:11):

The third pillar is data privacy and security. The drive towards more powerful models cannot come at the expense of individual privacy. The moral architecture would establish clear boundaries around what data can be collected, how it can be used, who can access it, and how long it can be retained. The fourth pillar is human oversight and accountability. AI should inform human decision making, not replace it. Boards and senior management must treat AI as a core element of risk and conduct, not a technology initiative delegated to IT. Every AI system needs a designated owner and a clear escalation path. The fifth pillar is environmental responsibility. Look, training a single large language model can consume enough electricity to power over a hundred average homes for a year, and an AI data center can consume the equivalent of a medium-sized city. For institutions with ESG commitments, deploying energy-intensive AI without accounting for its footprint is a fundamental contradiction.

(18:24):

A moral architecture would require us to assess the carbon footprint and favor energy-efficient solutions. The sixth and final pillar of the architecture is workforce impact. AI is not just reshaping financial products and services, but also the people who deliver them, as we just talked about. Many of those jobs, as I mentioned, are very much exposed. So this moral architecture requires workforce impact assessments before major deployments, as well as investments in re-skilling.
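[Editor's note: the disaggregated approval-rate tracking described under the fairness pillar can be sketched in a few lines. The group labels and decision log below are hypothetical, and the 0.8 threshold is the common "four-fifths rule" heuristic from fair-lending practice, not something specified in this conversation.]

```python
from collections import defaultdict

def approval_rates(decisions):
    """Compute per-group approval rates from (group, approved) records."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: a / t for g, (a, t) in counts.items()}

def adverse_impact_ratios(rates, reference):
    """Ratio of each group's approval rate to a reference group's rate.
    A ratio below 0.8 (the 'four-fifths rule') is a common red flag."""
    base = rates[reference]
    return {g: r / base for g, r in rates.items()}

# Hypothetical decision log: (demographic_group, approved)
log = [("A", True), ("A", True), ("A", False), ("A", True),
       ("B", True), ("B", False), ("B", False), ("B", False)]

rates = approval_rates(log)                          # A: 0.75, B: 0.25
ratios = adverse_impact_ratios(rates, reference="A")
flagged = [g for g, r in ratios.items() if r < 0.8]  # groups needing review
```

A real monitoring pipeline would run this kind of check on live decision data before and after each model change, as the pillar describes.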

Penny Crosman (18:58):

So would this architecture have very specific requirements? Or is it sort of a broad outline of, here are things you should be thinking about as you keep deploying AI?

Surjit Chana (19:14):

Yeah. My intention is to bring the guidance under each pillar down to a level that is very actionable and operational for banks to deploy. The desire is to provide enough detail that a bank can pick it up and make sure it is deploying AI supported by the moral architecture.

Penny Crosman (19:41):

And is there a demand for this, at least among the Global Alliance for Banking on Values or these banks that have an interest in showing up well in the world?

Surjit Chana (19:54):

Yeah, I think it does for sure resonate well with the banks that are values-based, such as the ones in the Global Alliance for Banking on Values. In fact, I had the opportunity just a couple of weeks ago to present to about 40 of those bank CEOs and about 30 board members at the GABV annual meeting, and it does resonate very well. But Penny, I really believe that the moral architecture has to be deployed in the broader financial services sector. And I do think there's enough value for those banks to understand why they need to be deploying this as well, in addition to the values-based banks.

Penny Crosman (20:41):

For sure. And are there any sort of immediate or long-term benefits that the banks could see? If somebody within a regional bank wanted to sell this idea within their company, like, let's adopt this architecture that Surjit's come up with, what's in it for them? What's in it for these institutions that are so profit-oriented?

Surjit Chana (21:12):

Penny, that's a great question. And really, I see six areas here. Overall, I think the business case is actually stronger than most leaders realize. To me, this is not about cost, but competitive strategy. Institutions that embrace responsible AI governance are better positioned competitively, financially, and strategically. The first of the big benefits I would mention is the expanded market opportunity and revenue. There are tens of millions of Americans that are credit invisible. Globally, we have 1.3 billion adults that are unbanked. Look, the institutions that build AI capable of fairly assessing creditworthiness in these populations gain a first-mover advantage. The second area is better risk management. As we discussed briefly, traditional credit scores use five to 10 variables. Machine learning models use 300, so you can have much finer-grained accuracy and deliver broader access. The deployment of the moral architecture also positions banks better from a regulatory perspective.

(22:24):

Look, non-compliance is expensive. The $70 million in combined penalties Apple and Goldman Sachs paid in 2024 is a prominent example, but the hidden cost of consent orders and years of enhanced supervisory scrutiny is even larger. And banks that deploy the moral architecture, I believe, will also see increases in investor confidence. An institution that can demonstrate robust AI governance signals to investors that it is managing a critical emerging risk responsibly, whereas one with no or limited visible AI governance exposes itself to investor concern and potentially a higher cost of capital. Research from the Global Alliance for Banking on Values found that banks with strong ESG practices outperform their conventional peers. This also allows banks to build brand trust. That is foundational for the banking relationship, and AI transparency is becoming a critical dimension of it. It's particularly important for younger demographics, which surveys consistently show prefer to bank with institutions that align with their values.

(23:31):

So this moral approach to AI can be a powerful brand differentiator. And finally, talent. The best AI engineers and data scientists want to work on problems that matter. Purpose-driven organizations attract and retain the talent that builds excellent systems, and that's a compounding advantage over time. So those are the incentives, good business case, expanded market opportunity, better risk management, better position from a regulatory perspective, increased investor confidence, improved brand trust, and talent access and retention.

Penny Crosman (24:05):

Are you doing any of the things that we've been talking about within Beneficial State Bank?

Surjit Chana (24:12):

Yes, Penny. I serve as a director and chair of the technology committee at Beneficial State Bank. We're a community bank focused on the West Coast of the United States, with branches in California, Oregon, and Washington. So when I talk about the moral architecture for AI, this isn't theoretical for me. It's a governance responsibility. In January, our board approved a comprehensive AI strategy for 2026 to 2029, designed to be exactly what I've been describing: an architecture, not a checklist. To be clear, BSB is still in the early stages of deploying the strategy, so I'm keen not to oversell or overpromise, but the strategy does align explicitly with the six pillars. For example, for the fairness and bias pillar, all of our AI initiatives must connect to the equity, inclusion and belonging mission pillar. Bias monitoring is built into the governance process before any tool goes live.

(25:11):

No AI capability is deployed without ERM validation of risk tiering, explainability, data usage, and compliance alignment. In support of the transparency and explainability pillar, our AI outputs must be understandable and defensible to management, auditors, and regulators. All client-facing AI must include a clear disclosure that interactions are system-generated and supervised by bank staff. CFPB guidance on automated decisioning is monitored continuously. For pillar three, data privacy and security, all AI capabilities at Beneficial State Bank must adhere to internal data governance standards and bank privacy policies. Data sharing is limited to what is strictly necessary. The AI inventory tracks data sources and sensitivity classifications for every system. In terms of human oversight, which is pillar four, human oversight for us is non-negotiable. Every AI system has a designated business owner accountable for its performance, accuracy, and ethical impact. The board IT committee receives regular reporting on this.

(26:26):

Then for environmental responsibility, which is pillar five, the bank explicitly favors low-energy cloud solutions and technology partners that support environmental stewardship. Sustainability is a required section in the use case review template. Deployments cannot contradict the bank's environmental mission. And finally, pillar six, which is workforce impact. For us, AI is an enablement layer, not a replacement strategy. The bank invests in AI skills development for leaders and subject matter experts, creates department-level AI champions, and targets 5% to 10% productivity improvements through augmentation, not workforce reduction.

Penny Crosman (27:10):

All right, great. And on that last point, the bank has committed to not reducing its workforce due to AI deployments?

Surjit Chana (27:20):

The bank is committed to making sure all the workforce considerations are accounted for as we do the planning, and that skills and retraining are going to be available, Penny.

Penny Crosman (27:33):

All right. So what are the next steps for this moral architecture you're developing?

Surjit Chana (27:40):

First of all, Penny, I'm really grateful to American Banker for publishing the op-ed piece, which obviously got a lot of positive engagement. As I mentioned, I've started to do some validation with the Global Alliance for Banking on Values community, and the feedback has been very positive. I'm also doing individual bank board and leadership meetings to share the ideas and thoughts here. I am wrapping up a thought leadership white paper, which provides more depth to the architecture, two or three levels deeper than what you and I just discussed. My plan is to essentially open source the architecture. I have a software background, so this is very familiar to me. My biggest driver is the desire to deploy the architecture and help address societal issues, to really use technology for social good. I'm also making plans to share more of this with the alphabet soup of industry organizations, such as the CDBA, the Community Development Bankers Association; the ABA, the American Bankers Association; and the ICBA.

(28:47):

I would love anybody from those organizations who are interested to reach out to me on LinkedIn, and I'll be happy to set up a briefing. I would also like to produce three case studies of banks that want to deploy the architecture. And if anybody's interested for their bank, please do reach out to me on LinkedIn.

Penny Crosman (29:07):

Okay, great. Well, Surjit Chana, thank you so much for joining us, and all of you, thank you for listening to the American Banker Podcast. I produced this episode with audio production by Anna Mints, Adnan Khan, and Wenwyst Jeanmary. Special thanks this week to Surjit Chana at Beneficial State Bank. Rate us, review us and subscribe to our content at www.americanbanker.com/subscribe. For American Banker, I'm Penny Crosman, and thanks for listening.