AI, Deepfakes and Sophisticated Fraud at Scale: Why Developing a Multi-Layered Security Strategy is Mission-Critical

Digital finance affords financial institutions and their customers an enhanced experience, the ability to interact and conduct financial transactions across channels, and the real-time delivery of hyper-personalized new products. Yet the increasingly sophisticated exploitation of security vulnerabilities and gaps using generative artificial intelligence (GenAI) is set to unleash a tidal wave of fraud. The panel discusses the urgency for banks and financial institutions to develop a multi-layered security strategy to detect and prevent the rapidly accelerating rise of GenAI-powered fraud across channels, much of it driven by extremely capable, global criminal and nation-state-run enterprises.

Transcription: 
Transcripts are generated using a combination of speech recognition software and human transcribers, and may contain errors. Please check the corresponding audio for the authoritative record.

Penny Crosman (00:09):
Welcome back after lunch. We have a great panel topic: it's deepfakes and sophisticated fraud. We have Ruchira Ghosh, who is officially head of customer authentication at TD Bank, and we have Sara Seguin, who is principal advisor of fraud and identity risk at Alloy. Thank you both so much for coming. Do you each want to say a little bit about the kinds of fraud mitigation, detection, and defense work you're doing now and that you've done in some of your past jobs as well? Start with you, Sara.

Sara Seguin (00:49):
Yeah, sure, absolutely. My background is in the banking industry. I spent about 17 or 18 years working for enterprise institutions, always in the fraud and identity space. Before I joined Alloy three years ago, I was running the enterprise fraud strategy for a top-25 bank and had a business analytics team as well. That covered everything from payments to cards to deposits to online; it was everything from a fraud strategy standpoint. Since then I've moved to the platform side: what institutions need from a platform to fight fraud. I've definitely seen a shift in the past two decades from where we've been to where we're going, from when everything was siloed to now pulling everything together. I think this conversation is coming at just the right time.

Ruchira Ghosh (01:48):
Yeah. To Sara's point, my background is very interesting. Think of those people who are a jack of all trades and master of none; the good part of that is it gives you a perspective on every seat at the table. I started in technology on the payments side. Then I moved to capital markets, trading, and fixed income as a technologist. Then I moved to AML, KYC, and fraud with data analytics. My latest gig is customer authentication and digital identity. The range of roles I've played over my 20-plus-year career gives me an immense appreciation of what each role takes to combat fraud: starting as a technologist, then as an operations person, then as a product owner, and finally as a security individual.

(02:41):
To the question that was asked, fraud is evolving. I think we need to evolve in each of our roles to combat fraud in an end-to-end holistic approach.

Penny Crosman (02:52):
So what are some of the worst or scariest examples that you've seen, heard of, or read about regarding deepfakes and sophisticated fraud?

Ruchira Ghosh (03:02):
It's mind-blowing sitting on the fraud side and the customer authentication side, because you have trust and fear at two ends of the spectrum. You want to trust as humans, but then you're scared when too much information is thrown at you. The scariest thing I see most often in my role is scams and deepfakes. We know how elderly individuals are targeted with romance scams; those have always been here. But when you see elderly folks getting scammed because they trust their loved ones, and deepfake technology is pretending to be that loved one, that story is so powerful. Both as a human and as a technologist, it shakes you up. It makes me feel I need to be stronger in combating fraud to safeguard my customers' assets.

(03:59):
Romance scams have always been here. It's about trust and relationships, but how far do you go in that trust? Those are the two areas where we see a lot of pressure testing of our controls. All of that is tied with money movement and payment rails—whether it's wires or any kind of payment around these scams that leverage those situations.

Sara Seguin (04:33):
As we think about the evolution, if we go back a few decades, you think of document forgery where people used nail polish remover. Today you can't look at an ID and tell the difference anymore. If you worked in a branch, you knew when you got a bad ID; you could tell the difference. We've evolved in such a way, especially through AI, that the vectors and attacks are much harder to detect because they're much more accurate now.

Penny Crosman (05:25):
Just to follow up on that, what are some of the best ways to determine that a potential new customer onboarding through the mobile app or online banking is real and legitimate, not a criminal, scammer, bot, or synthetic ID? What are some of the hurdles you put them through to make sure they're legit?

Sara Seguin (05:49):
If I go back 20 years, maybe you used one fraud screening tool and that was good enough to detect identity theft. We're not in that world anymore. What we see more institutions moving to is a multi-layered approach. You write out multiple categories: what are we using for behavioral, for device, for fraud screening? Do we need more than one? What are we using for synthetic identity? You have this whole list of categories, and you make sure those controls sit not just at onboarding but after customers are in. The goal is to keep bad actors out from the very beginning, but then you need layers to keep figuring out whether any bad ones got in.
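
To make that layering concrete, here is a minimal sketch of the kind of control waterfall Sara describes, in which each category (device, behavioral, identity screening, synthetic identity) contributes a risk score and no single check decides alone. Every name, signal, and threshold below is invented for illustration; it is not any vendor's actual API.

```python
# Hypothetical multi-layered onboarding screen: each layer returns a
# risk score in [0, 1], and the decision combines them rather than
# relying on any single control.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Applicant:
    device_fingerprint: str
    typing_cadence_ms: float  # behavioral signal captured client-side
    identity_score: float     # 0..1 from an identity-verification layer
    synthetic_score: float    # 0..1 from a synthetic-identity model

def device_layer(a: Applicant) -> float:
    # A known emulator or tampered device would score high in practice.
    return 0.9 if a.device_fingerprint == "emulator" else 0.1

def behavioral_layer(a: Applicant) -> float:
    # Superhuman typing speed is a classic bot tell.
    return 0.8 if a.typing_cadence_ms < 20 else 0.1

def screening_layer(a: Applicant) -> float:
    return 1.0 - a.identity_score

def synthetic_layer(a: Applicant) -> float:
    return a.synthetic_score

LAYERS: list[Callable[[Applicant], float]] = [
    device_layer, behavioral_layer, screening_layer, synthetic_layer,
]

def onboarding_decision(a: Applicant) -> str:
    scores = [layer(a) for layer in LAYERS]
    if max(scores) > 0.7:                 # one layer firing hard
        return "deny"
    if sum(scores) / len(scores) > 0.4:   # several layers mildly elevated
        return "step_up"                  # e.g., document verification
    return "approve"

print(onboarding_decision(Applicant("emulator", 12.0, 0.55, 0.35)))  # deny
```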

Ruchira Ghosh (06:47):
I'd like to add to what Sara said. In the identity space, one of the biggest challenges is that trust is no longer something you can touch or see; it's something you have to constantly validate with evidence. As the digital landscape evolves, how you shift your parameters of measuring trust is key. We call it "defense in depth" or a multi-layered strategy. In our world, we call it "building a fortress": if one control fails, the second kicks in, and so on. The first and most important layer is customer education and the onboarding journey.

(07:40):
You can educate customers, but the interpretation may change when onboarding happens. Identifying and verifying the customer is your next level. Then you layer on agents: how do you distinguish between a human and an agent coming in? We validate identity around policy, governance, detection, and response. Once you cross that phase, the next is contextual data. How do you tie the journey of that individual or machine ID in your network? To Sara's point, are you tracking geolocation and behavioral characteristics, or seeing anomalies in where they log in from? Is there a natural difference between how they used to act historically and what they do now?

(08:35):
After that, how do you build controls for early anomaly detection? Once you figure that out, the next step is response. Do I step them up? Do I ask for multifactor authentication? Do I ask them to wave their hand or provide a government ID for document verification? These layers of defense are embedded in the journey to serve the customer with less friction, but eventually respond fast if an incident happens—stopping money movement and revoking access. Layers of defense are what will help you combat evolving technology.
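
As a rough illustration of that detect-then-respond chain, the sketch below maps an in-session anomaly score to a graduated response, from no friction up to revoking access and stopping money movement. The thresholds and action names are hypothetical, not any bank's actual playbook.

```python
# Hypothetical graduated-response table: higher anomaly scores earn
# progressively stronger friction, and the harshest action is reserved
# for high-risk events that also move money.
from enum import Enum

class Response(Enum):
    ALLOW = "allow"
    MFA_STEP_UP = "mfa_step_up"        # e.g., push or biometric challenge
    DOCUMENT_CHECK = "document_check"  # liveness check or government ID
    FREEZE_AND_REVOKE = "freeze_and_revoke"

def respond(anomaly_score: float, moving_money: bool) -> Response:
    if anomaly_score < 0.3:
        return Response.ALLOW
    if anomaly_score < 0.6:
        return Response.MFA_STEP_UP
    if anomaly_score < 0.85 or not moving_money:
        return Response.DOCUMENT_CHECK
    # Highest severity on a payment: stop the transfer, cut access.
    return Response.FREEZE_AND_REVOKE

print(respond(0.9, moving_money=True))   # Response.FREEZE_AND_REVOKE
print(respond(0.5, moving_money=False))  # Response.MFA_STEP_UP
```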

Penny Crosman (09:32):
That makes sense. Another point at which you have to do all this work is at login. We recently spoke to someone at JPMorgan Chase who identified a series of robberies where people were grabbing phones out of people's hands in bars, using Face ID to log in, and then Venmoing money to themselves. Chase added another layer of authentication on top of Face ID: an actual Apple password. Apparently, 70% of people signed up for it. What else can be done at the point of login to tell whether a person is legit or should be blocked?

Sara Seguin (10:49):
From a login perspective, you have your credentials, but it goes back to whether your institution has the device and behavioral data to use as a baseline. Once someone tries to log in, they might have the username and password—which happens in account takeovers—but you can tell if it's a bot. You have lines of defense to detect a bot or know something doesn't seem right even after they get in. It doesn't necessarily have to be an immediate lockdown; it could be a reduction in entitlements. There are different levers to pull instead of having the same static behavior for everyone. We can take the baseline of what we know is good for that client, and when anomalies arise—like sending a wire for the first time—you insert friction before that payment is sent.
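
A minimal sketch of that baselining idea, assuming hypothetical per-customer history tables: friction is inserted only when the event is first-seen for that client, such as an unrecognized device or a first-ever wire.

```python
# Hypothetical baselines built from past sessions and payments.
known_devices = {"cust-123": {"iphone-abc"}}
seen_payment_types = {"cust-123": {"ach", "card"}}

def needs_friction(customer_id: str, device_id: str, payment_type: str) -> bool:
    new_device = device_id not in known_devices.get(customer_id, set())
    first_payment = payment_type not in seen_payment_types.get(customer_id, set())
    # In a fuller model these could pull different levers (reduced
    # entitlements versus a hard step-up); here either anomaly inserts
    # friction before the payment is released.
    return new_device or first_payment

print(needs_friction("cust-123", "iphone-abc", "wire"))  # True: first wire
print(needs_friction("cust-123", "iphone-abc", "ach"))   # False: all baseline
```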

Ruchira Ghosh (12:17):
I agree. We teach our kids, "when you see something, say something," and that applies to us as adults in our day-to-day interactions. In the authentication space, it's about immediate anomaly detection and then enforcing a step-up or creating friction that feels normal. For example, if you see a transfer above $5,000, you could require a liveness check, having the person turn their head to ensure they are who they say they are. When sending money, you can require both the beneficiary and the sender to authorize the payment. These are examples of introducing small friction in the customer journey. Businesses sometimes resist this, but it comes back to education and storytelling: are you willing to risk your dollars, and at what cost? Also, the introduction of passkeys, going passwordless and tying authentication to a bound device, is important. We want to move away from things like OTPs (one-time passwords), which are very phishable today. Enhanced biometrics are much harder to phish.
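
Those two payment controls can be sketched as a small approval state machine: the payment releases only once every required approval is in, with a liveness step added above a threshold. The $5,000 figure comes from Ruchira's example; everything else is illustrative, not TD's actual policy.

```python
# Hypothetical dual-authorization flow with a liveness step-up above
# a dollar threshold.
LIVENESS_THRESHOLD = 5_000

class PendingPayment:
    def __init__(self, amount: float):
        self.amount = amount
        self.approvals: set[str] = set()

    def required_steps(self) -> set[str]:
        steps = {"sender", "beneficiary"}  # both sides must authorize
        if self.amount > LIVENESS_THRESHOLD:
            steps.add("liveness")          # e.g., head-turn check
        return steps

    def approve(self, step: str) -> None:
        self.approvals.add(step)

    def can_release(self) -> bool:
        return self.required_steps() <= self.approvals

p = PendingPayment(7_500)
p.approve("sender")
p.approve("beneficiary")
print(p.can_release())  # False: liveness still required above $5,000
p.approve("liveness")
print(p.can_release())  # True: all approvals in, payment can release
```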

Penny Crosman (14:33):
Ruchira, you were talking about scams like elder and romance scams where the login and onboarding are fine, but the customer is being tricked. What are some ways of tackling those cases?

Ruchira Ghosh (15:03):
Once a person has verified themselves, it becomes interesting to put friction in the system. I'm trusting you, but I'm actually not trusting you. That is where you insert technology at the right point. For example, on payment rails, you put deepfake detection tools and document verification. If a high-net-worth client is getting money in, the controls might be less strict, but when money is going out, you have to be very strong in your multifactor authentication to step up that journey.

(16:07):
At TD, we are leveraging AI and threat intelligence for anomaly detection. We are mapping that with a human element of oversight because tools are only as powerful as what you tell them to do. Having a human review it helps address bias, which plays an important role from a regulatory lens in banking. It's the collective introduction of friction and controls at the right time in the end-to-end journey.

Sara Seguin (16:55):
I completely agree. We've heard a lot today about good and bad data, the old "garbage in, garbage out." If you are not feeding enough data into your system, you aren't doing a great job of figuring out who your client is. You may know their direct deposits, their subscriptions, or where they eat when traveling. This is good for the customer experience, but it's also great for fraud detection, because the more you know, the better you can apply the right amount of friction.

Ruchira Ghosh (17:51):
To Sara's point, with a digital presence, we have a lot of your information. We are trying to correlate that information to understand your behavior and spend patterns to give you customized solutions. Customization is a sales proposition, but it's also for fraud detection. If someone who never spends at high-end boutiques suddenly starts, you have to ask if it's the actual person or an account takeover. Data serves two sides of the same coin: revenue generation and fraud detection.
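
As a toy version of that spend-pattern check, the sketch below flags a transaction whose amount sits far outside the customer's historical distribution for a merchant category. A simple z-score stands in for whatever production model a bank would actually use; the data is invented.

```python
import statistics

# Hypothetical monthly spend history (USD) by merchant category.
history = {"luxury_retail": [0, 0, 0, 45, 0, 0]}

def is_anomalous(category: str, amount: float, z_cutoff: float = 3.0) -> bool:
    past = history.get(category, [])
    if len(past) < 2:
        return amount > 0                  # no baseline: any spend is notable
    mean = statistics.mean(past)
    stdev = statistics.stdev(past) or 1.0  # guard against zero variance
    return (amount - mean) / stdev > z_cutoff

# A sudden $2,400 boutique purchase is far outside this baseline, which
# could mean a new habit or an account takeover; either way, step up.
print(is_anomalous("luxury_retail", 2_400))  # True
```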

Penny Crosman (18:51):
It's tricky, isn't it? With romance scams, people really think they're in a relationship and don't want you questioning it. With elder fraud, people are sometimes unwilling to recognize it or it's a family member taking money. It's delicate.

Ruchira Ghosh (19:19):
There's an emotional aspect which is very scary. I encourage families to talk about fraud at the dinner table like a normal conversation. If you aren't talking about it, it's very hard to crack. My own father-in-law in India got scammed, and I didn't even know because he felt embarrassed to share it for three months. It's important to normalize these conversations to know what's happening in your family's lives.

Penny Crosman (20:10):
Deepfakes keep getting better as the AI that creates them improves. What are some of the latest scary examples you're seeing, and how do you detect them?

Ruchira Ghosh (20:36):
The first example is employment. A fake resume recently came my way; we scheduled interviews, and only then realized the resume was fake. Hiring is being targeted by deepfakes, with resumes built to perfection using breached data. You pretend to be a person you're not to bypass screening tools. The second level is interviews. You see the same background, but the lips aren't syncing with the eyes or the voice. We have voice spoofing, face overlays, and proxy interviews. The US Department of Justice recently described North Korean agents getting hired at companies to extract funds. Imagine the impact on a company's operations and the broader financial system.

Penny Crosman (22:29):
I've heard of banks actually hiring people who were AI, which is mind-boggling. Sara, anything to add on deepfake detection?

Sara Seguin (22:40):
I'll take a different angle. We know about voice and video scams, but there are peripheral events outside of banking happening to your clients too. There was a recent example in real estate where a realtor received messages and was asked on Zoom to move in different ways; the fraudsters were trying to create a deepfake of the realtor. The end state was that the realtor's clients became victims, because the deepfake told them to transfer money to a fraudster. If your realtor tells you to do it, you're going to do it to close on the property. People call the banks when this happens. It's important for banks to know this is happening beyond traditional romance scams.

Ruchira Ghosh (24:38):
Deepfaking the persona of a loved one to entice an elderly person to transfer money is a huge problem. It goes all the way from technology to a scam, then fraud, and then money movement. You have to find creative ways to combat it, like family passwords. It cannot be done just by a bank; it has to involve the individual as well.

Penny Crosman (25:31):
Do you think this puts more pressure on fraud information sharing among different entities?

Ruchira Ghosh (25:43):
Information sharing is tricky because you have to balance relevance against exposing your own vulnerabilities. If I tell you exactly how geolocation is tracked, I've given the secret sauce away to hackers. Information sharing has to happen across banks, and we do that through consortiums and regulatory bodies. However, privacy must be maintained. Our primary job is to innovate responsibly while safeguarding assets. It's a hard balancing act.

Penny Crosman (27:10):
Are agentic AI and generative AI making this work harder, Sara?

Sara Seguin (27:19):
Yes. Generative AI is being used by both fraudsters and financial institutions. There's been a lot of talk about fighting AI with AI. Bad actors are often ahead because they don't have regulations, they are well-funded, and they are well-organized. It started with bots on onboarding systems and DDoS attacks, leading up to today's agentic AI. It makes it harder, but we have to respond with purpose and intent to see a return.

Ruchira Ghosh (28:32):
In the identity space, agentic AI plays an interesting role. The interface is changing; in the past, you used a driver's license or credentials, but today machines are doing that on your behalf. Differentiating between a machine-driven agent and a human individual is a transformative area still being explored. It requires re-wiring enterprise strategies. Risk and threats are changing, so trust needs to be looked at in a measurable way. What scares me is what happens when these agents turn "rogue." If a good bot becomes a bad bot, who is liable? Is it the individual, the data, the prompt, or the agent? It's a very interesting, fast-evolving space.

Penny Crosman (30:38):
More companies are interested in agentic commerce where an AI can find a product and pull money from your bank account for you. How are banks going to distinguish between good and bad bots?

Sara Seguin (31:18):
They will try to establish trust factors for certain bots, but at some point, those will be compromised too. It will be challenging, but there is technology that is very good at looking at patterns and anomalies to distinguish good from bad. Trust components will be key.

Ruchira Ghosh (32:10):
Business interactions and customer decisions will both be made by these agents. You have to treat agents like humans but continuously monitor and audit them with human governance. The last point is revocation: if an agent goes rogue, how fast can you revoke its access before damage occurs? That's where the value lies.
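
To illustrate why revocation speed matters, here is a minimal sketch in which every agent action re-checks a revocation list before executing, so a rogue agent is cut off mid-session rather than at its next login. All identifiers are hypothetical.

```python
import time

revoked_agents: dict[str, float] = {}  # agent_id -> revocation timestamp

def revoke(agent_id: str) -> None:
    revoked_agents[agent_id] = time.time()

def execute_action(agent_id: str, action: str) -> str:
    # Check revocation on every call, not just at session start, so a
    # rogue agent cannot coast on an already-issued session.
    if agent_id in revoked_agents:
        raise PermissionError(f"agent {agent_id} revoked; {action!r} blocked")
    return f"executed {action}"

print(execute_action("shopping-agent-7", "pay_invoice"))  # executed
revoke("shopping-agent-7")
# execute_action("shopping-agent-7", "pay_invoice")  # would now raise
```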

Penny Crosman (32:51):
We have a few minutes left for audience questions.

Audience Member 1 (33:02):
Thank you, this was great. I spent time at a threat intelligence company, and I feel this problem cannot be solved just from the banking side. You have to partner with tech companies because that's where romance scams start—on dating apps or WhatsApp. We also know many scams are run by organized compounds in Asia. How will banks ever solve this? It feels like treating the symptoms but not the cause.

Sara Seguin (34:30):
That's a fantastic question. Those scam compounds are a global issue. To really solve this, it would have to be addressed at a high global level. Telcos and app companies also need more regulation so they can partner more closely with financial organizations to stop this at the source.

Ruchira Ghosh (36:11):
We have to acknowledge that you will never fully "solve" the problem. What you do is build your fortress strong enough to combat it with the least impact. You have to stay relevant. Five years ago, few were talking about deepfakes. We have to continuously innovate with purpose. We tackle threats with attributes, signals, and data enrichment to reduce the problem.

Penny Crosman (37:18):
Ruchira, Sara, thank you so much for joining us. This was great.

Ruchira Ghosh (37:21):
Thank you guys.