Fireside Chat: Interview with Brian Minick, Chief Technology and Information Security Officer, Fifth Third Bank

Gain strategic insights and practical tactics for optimizing your banking environment to effectively combat the threats posed by cyber risks and losses resulting from increasingly sophisticated attacks and fraud.


Transcription:

Penny Crosman (00:10):

Hello everyone. It is my pleasure to welcome our guest speaker for this next session. Brian Minick is Chief Technology and Information Security Officer at Fifth Third. Please welcome Brian. Brian has many years of experience in defense, FinTech, and banking. In his current job, he oversees all security at Fifth Third. My first question to you is the most obvious: what keeps you up at night from all the cybersecurity and fraud threats out there?

Brian Minick (00:52):

I've never gotten that question before. We think about threats in really different categories. So you have nation-state intelligence organizations, which you can think of as the apex predators, pulling off the most advanced and sophisticated types of attacks. I started my career in the defense sector, defending military information, so I dealt with a lot of those types of attackers. From a financial sector perspective, there's a little of that, but most of what we're dealing with is organized crime trying to steal money. It's all about financial motivations. In terms of capabilities and sophistication, they are less advanced than a nation-state apparatus, but still a very concerning and capable attacker. So those are the things we are mostly dealing with. What keeps me up at night is not just protecting the bank itself, but also protecting the customers, because a lot of these folks are not necessarily targeting the institution and trying to break into the bank, but rather trying to target and exploit customers.

Penny Crosman (02:23):

There are a few things to unpack there, but where do you get your information about the most severe threats out there, and how do you share that information with others? Is there anything that's really working on that front?

Brian Minick (02:37):

Yeah, there are a couple of tiers of information sharing. There are a lot of groups that do threat intelligence and information sharing about who attackers are, what they're doing, and how they're doing their attacks. In those larger groups, the information you get isn't always as good. The really great information is shared person to person, analyst to analyst. I can give you an example of how this played out in the defense sector. We had a nation-state attacker who was trying to get information about military projects we were working on with a number of defense contractors. We had a way of detecting this attacker and were using it effectively for over a year. Anytime they tried to attack us, the method we developed was effective at catching it.

(03:44):

Over the long term, it was effective. We had a session with, I think it was seven or eight defense contractors in a classified space, a SCIF. So no bugs, no listening devices, no phones; you could only take paper in and out. We shared how we were tracking this group. Less than a week after that meeting, the attacker changed their pattern, and our detection method was no longer effective. That told us that someone in that room went back to their company and lost control of that information. The attacker got their hands on it and realized, "Oh, this is how they've been catching us," and changed their method. So, that type of very sensitive, very effective information we tend to keep within a trusted group. Things that are less long-term effective, maybe they work today but we know won't last forever, we'll share more broadly among the group. So, yeah, there are different tiers.

Penny Crosman (04:53):

It's fascinating that a small group meeting in this super controlled environment can still get leaked. What about the warnings the FBI put out over the weekend about people getting smished, or scammed, over text messages? The messages said you had unpaid toll collection fees. I've gotten several of those over the past year, so it wasn't really news to me, but how effective are those warnings from the government?

Brian Minick (05:25):

It's interesting because you have to consider who the audience is for those warnings. They post those pretty broadly, and they're generally available on the internet, so they're targeting a very broad audience: consumers and everyone tied to that. I think they are effective from that perspective. But when you think about what goes into the creation of those, there was a group of organized criminals that my organization was tracking. We had some information on them and were sharing it with the FBI, and it actually did result in one of those notifications, but it took several months for the information we provided to make its way into one of those. There are a lot of processes inside the bureau itself that go into creating those products. So I don't know that they're the most timely thing. If I'm an analyst on the front lines trying to identify some of these things, it's not always the latest information. But if I am the average person who gets a text message about a missed toll payment, yes, it's extremely effective. So again, back to that tiering idea, when you share information, you want to make sure you're targeting those "intelligence products" for the right audience and you tailor that information based on what good you're trying to do and what benefit you're trying to derive.

Penny Crosman (07:03):

That makes sense. As you've been saying, I've heard of a lot of banks having these very small groups that meet over the phone or over Zoom and just share confidential information. But as you've pointed out, it can still seep out. When you say that you were tracking this group of hackers, can you say anything about how you were doing that? What technology did you use?

Brian Minick (07:25):

You want my secrets right here?

Penny Crosman (07:27):

That's what you're here for.

Brian Minick (07:29):

It really depends. I think the key is to have as broad of a toolkit for detecting attackers as you can possibly have. We think about buying some of the best industry products to knock down the noise, and that'll catch 95% or more of all the attacks you face. Then, we use proprietary capabilities with our own threat intelligence to catch that remaining 5%. When you think about the top attackers, many of them have bought a lot of this technology and are testing their attacks against those things to know what works and what they can slide by. So we find that layered approach is effective, where my team is able to focus on the very top of the attack pyramid, if you will. We try to have a full toolkit of detection capabilities that are dynamic, like a morphing defensive posture that we can change and adjust as quickly, if not more quickly, than the attackers can adjust their attacks. So that's part of our strategy. Yes, there's a lot of great technology out there, and we want to use that to catch as much as we can, but we realize that there is a category of attackers who are able to evade that, and that's where our "secret sauce" comes into play.

Penny Crosman (09:02):

I feel like a lot of the cybersecurity attacks that we've been writing about over the last couple of years have been through a third-party vendor, like a core provider or a digital banking provider. Sometimes that provider uses software like MOVEit or other file transfer software that hackers can find a vulnerability in, get in, and then once they're in, they're able to get customer data and so forth. Do you see that as an elevated risk, and how do you manage that? I mean, a bank like Fifth Third must have thousands of software programs and vendors that you work with.

(09:41):

How do you get your arms around that?

Brian Minick (09:43):

Third-party risk is a challenge, and you see it with examples like MOVEit, which was a big one for our industry, and SolarWinds is another classic example of third-party supply chain risk. So it is a big challenge, and I think a multi-pronged approach is required. One is from a third-party risk perspective: vetting your third parties, understanding what their cybersecurity programs and controls look like, and understanding their processes for maintaining their environments. There are a lot of new technologies coming to bear, and a lot of these startups just don't have the capacity or resources to do some of the cyber protections that you would need. From a financial sector perspective, the bar is much higher for what we would require than a lot of other industries.

(10:45):

Before coming to the bank, I actually ran a cybersecurity startup. We founded and started it, and there were specifically certain sectors that we did not want to sell into because we knew as a startup the expectations and some of the things we would have to do, and the cost of that sale was much higher. So I think that's part of the challenge we have as we talk to and try to work with third parties. If they have not sold into or have developed their product specifically for this sector, there are probably a lot of gaps in what they need to do. We're not the 800-pound gorilla in the room. A lot of these technology companies have many customers and a large addressable market, and like I did when I was in their shoes, they say, "This isn't quite the industry that I want to sell into. It's not the market I want to go to."

Penny Crosman (11:43):

Right. It's tough. It's tough.

Brian Minick (11:44):

So we don't necessarily have the scale, at least a number of the financial institutions don't, to push vendors to do some of the things that we need them to do. So I think as an industry, there's an opportunity for us to come together and start trying to drive that together with some of these suppliers and third parties. Beyond that, it is back to that toolkit and the layered defenses. If we do bring a product in, we need to make sure we have the right visibility into what it's doing and that it's plugged into our overall detection capabilities. That way, if something is coming in through that channel, we are able to identify it, respond to it quickly, and deal with it.

Penny Crosman (12:28):

Then, at the other end of the spectrum, you have the big hyperscalers, and a lot of banks are moving more and more of their applications into a cloud at AWS, Google, Microsoft, or somebody like that. So the responsibility for security is being shifted toward these very large companies, especially with AI and more people adopting a small number of foundation models. How do you look at that shift in risk, and do you think there's some concentration risk building? How do you think about that and deal with it?

Brian Minick (13:05):

That's a good question. I like how you put it: it's a shift of risk. There is no silver bullet here that eliminates all risk. Some things from a security perspective are easier when you go to the cloud. I think of encryption, for example. When you're in a cloud environment, you just have to click the encrypt button, and it takes care of it. My job as a security leader is to make sure that button is clicked and to manage the configuration of it. When we first started doing encryption in our data centers, it was, "I've got to buy an encryption package, I've got to install that, manage the keys to it, make sure it's properly encrypting all the data, and if something breaks and it's not doing it, I have to fix that." It was a lot harder to do those types of things.

(13:56):

Cloud does make that easier. Rather than running and managing software packages and making sure they're installed correctly, I'm doing more of a configuration management approach and making sure everything's configured appropriately. So that is easier, but to the point you raised, there is now concentration risk tied to that. Some of these hyperscalers become more of a target for people who are trying to achieve whatever means they're trying to achieve. We think about some of the big password vault providers that have been compromised because they contain the passwords for so many people. It made them a good target. I think similarly with hyperscalers, it's not a one-size-fits-all silver bullet that solves all your problems. It does make it a lot easier, but it also introduces new challenges. At the end of the day, we've transitioned from what we articulated as a cloud-first strategy, which was, "Hey, we think this is the best place to host and run our applications," to more of a cloud-smart strategy. Cloud is one option among many in terms of where we can run things. We need to understand the pros and cons of all those options, whether it's running it within a hyperscaler, a vendor's SaaS version, or even running it on-prem. We make an intelligent business decision around where we should put these things based on the specific business case.

Penny Crosman (15:39):

One interesting data breach fairly recently, I think, was Coinbase. My colleague, Carter Pape, wrote about it. They were compromised by hackers, or fraudsters, who bribed customer service agents in India to provide access to customer data. To me, that raises interesting questions about how you prevent your employees from colluding, especially if you're using an outsourcer and your connection is not very direct to them. We've seen this with some of the money laundering cases, where employees were willing to open accounts for money launderers in exchange for a gift card or that kind of thing. How do you create an environment where people are not likely to do that?

Brian Minick (16:31):

Right. And let's face it, that situation is getting harder from a protection perspective. You've got things like the Ray-Ban glasses that have cameras in them, right? So a fraudster may come to a person and say, "I don't need you to do anything. Just wear these glasses." You're not recording, you're not writing something down, you're not stealing. Just wear the glasses while you're helping customers. So it gets harder for us from a prevention and detection perspective because in a situation like that, how am I going to catch it, right? It's not necessarily going through my network. So I think the key to that is all about communicating awareness and educating folks within the organization. The Coinbase example is great. How did that work out for the folks who were in the call center at Coinbase? What happened to them? I don't think that was a good situation for them. We need to let folks know, "Hey, here is how to report this happening and how to let us know, and then we can help you through that situation."

(17:49):

We also let folks know, in the right way, that if you participate in this type of thing, there are consequences, and it generally doesn't work out well for you. You're the dispensable one in this situation from an attacker's perspective. They're not here to take care of you in this transaction. And then we tie that back to, "When this happens, here's how you let us know and how we can work through it together." So I think that combination of education—both the pros and cons of the situation, and how to deal with it—is key for us. And also vigilance. We try hard through our education programs to get every employee within the bank to be a member of the information security organization. "You see something, say something, tell us about it." We've had people who have clicked on links they shouldn't have clicked on, who have gotten phone calls and interacted with an attacker in a way that they shouldn't have. After the phone call, they hung up and didn't feel right about it. They thought, "Wait a minute, that was weird. I don't think that was right," and they told us. We rewarded them. It wasn't, "Who did this? They're not going to work here." It was, "Who did this? Thank you for letting me know that that happened." So I think creating that culture where you're encouraging people to let you know what they're seeing and what's going on, and providing that information so they're not scared to say something, even if they did something wrong, is also key. We do a lot and talk about how we can make employees and everyone within the institution a member of the information security team so that when they see something, they can tell us and we can spot it.

Penny Crosman (19:47):

So one school of thought lately, or it's been going on for a while, is that companies like banks should be more proactive. They should do a lot of penetration testing and have bug bounty programs where they're paying people to find flaws in their software. How do you feel about that kind of proactive stance? Is that something you try to do?

Brian Minick (20:10):

Absolutely. Anything we can do to find weaknesses in our systems and our code, we welcome. Anything legal that we can do to find these things is welcome. We do have bug bounty programs, we have penetration tests, and we have people within the organization who are constantly trying to, what we call, "red team," trying to break in and work through our defenses in ways that we haven't thought through. We've had people going to ATMs and trying to pull things off while wearing cool hacker shirts because they know the camera is going to get them and they'll have a letter in their pocket in case it goes wrong and law enforcement gets involved. But anything that we can do to encourage that in a responsible way, to work with us to find gaps and issues, is fantastic. I would rather those processes and capabilities find something in our environment than an attacker find that and we have to work through that problem.

Penny Crosman (21:23):

Another thing that I think has been affecting a lot of banks is that these fraudsters will create fake bank sites that are so realistic that when somebody types "Fifth Third" into a search engine,

(21:35):

they'll see those sites before or alongside the real site. What can a bank like yours do about that?

Brian Minick (21:42):

Yeah, that has been a challenge that the industry has been facing for a while now. We've been telling our customers to bookmark our site and not go to Google and click the very first thing in the list, because what we're seeing is that attackers will actually buy ad space associated with banks. So if a customer goes out to Google and searches for "bank XYZ," the attacker can buy an ad for "bank XYZ." A lot of times, that first result will be the attacker's ad, not the bank's actual site. The ad is designed to look like the bank. Yes, it has a little "sponsored" label applied to it, but it looks just like the bank's result, and it even shows the URL that you're going to. In many cases, those URLs have absolutely nothing to do with the institution you're looking at.

(22:45):

We've seen things like griswoldrealtyadvisors.com looking like a bank login site, and you're like, who goes to Griswold and goes, "Yeah, that's my bank. Let me give you my user ID and password?" They don't even realize it's because they searched for it, saw that first result, trusted Google to give them the most relevant result, clicked it, and put in their information. And that's been a challenge for the industry as a whole; it's rampant. From a financial institution perspective, we deal with these things by getting them taken down, and one of the quickest ways to deal with it is through trademark infringement. If you talk to Google initially, their thought is, "Well, this is a customer of ours who bought an ad. We can't share who that is." But if we come in and say, "That's our trademark, this customer of yours is infringing," you get action a lot quicker. So, to your question about what institutions can do to protect themselves: it's watching what's out there, being able to respond, and having the right partnerships within the industry to get action taken quickly, whether that is the takedown of illegitimate websites or ads. It's building out those relationships and getting the ability to see, "Hey, a site just went up that looks very much like my login site but isn't," and then being able to deal with that.

Penny Crosman (24:36):

You would think Google would notice that when they're taking the ad payment, but...

Brian Minick (24:42):

There's a volume situation there.

Penny Crosman (24:44):

All the things happening there. Exactly. And I think in general, there's also been this movement toward trying to fool customers not just with a fake website, but with text messages, with emails, and they're often quite effective and result in things like Zelle fraud or other kinds of fraud where people get scammed out of their money. And increasingly through deepfakes and AI. How much do you worry about the threat of deepfakes, whether it's a call from someone who sounds like a loved one or even video or otherwise?

Brian Minick (25:28):

We think about it in two ways. One is from an organizational perspective—deepfakes used against employees. Some of you may be familiar with the MGM breach and ransomware situation a little while ago that was precipitated by an attacker calling the help desk and getting a password reset for an employee. So we've thought through some of those scenarios and have actually implemented some internal processes, particularly around sensitive transactions, like a password reset or an account unlock. We'll go "out of band" of that conversation and say, "Okay, we're going to contact this person's manager and have them verify that this is actually an employee, that they do have this situation, and they need help with it."

(26:37):

So we're trying to think of ways to verify situations to make sure that it is what it is within our employee base and with some of the transactions and help desk calls that take place. Same thing with money movement. The CEO sends you an email and says, "Hey, I need you to go move money from here to here." There was one deepfake case where a company's CEO appeared to be on a Zoom call, joined through a link. It was the image of the CEO in their office, but it just didn't feel quite right, and it was actually an AI-generated avatar. So we're asking, how can we build processes that will account for that and verify those things?

(27:33):

The other side is from a customer perspective, as you mentioned, and how do we protect customers? A lot of that at this point is around education and ways that we can start helping our customers understand, "Look, this is how we're going to communicate to you. These are the mechanisms, whether that is through our mobile application, internet banking, or whatever, and this is what that looks like." So it really also drives us to be more disciplined in the mechanisms we use and how we intentionally start interacting with our customer base. Then, we train the customer base and educate them on how that works.

Penny Crosman (28:21):

So I have a follow-up, and then we will go to questions from the audience. With that example you gave of asking for someone else to verify a password change, it seems to me that's adding quite a bit of friction to a request that used to be easy to handle. So how do you balance that need to make it a little bit harder and more secure, but not make it frustrating for legitimate customers who just want to be able to do their banking?

Brian Minick (28:51):

It's funny you use that example because what we're trying to do is really find the win-win. It's not in every situation where security and convenience are opposing each other and you have to pick one or the other. We are trying to find that win-win where we can bring those two together. Think about what a lot of folks have done with biometric authentication. That has improved the situation from a security perspective and also made it more convenient for customers to access and do things. So it takes creativity and a lot of thought to try to find that win-win in those situations. And that's really the goal and the challenge that I have for my team. As you think through some of these pieces, how can we either make the security piece as transparent as possible or try to find that win-win? Unfortunately, the example you grabbed onto was not one where we were able to do that, and then it's just a matter of, "Hey, what is the risk that we're trying to reduce?" And then, how do I use that as a training and educational opportunity for the employee base? We're not doing this to be annoying. Your information security team did not wake up this morning and go, "Man, how can I make it harder for them to do their jobs today?"

(30:17):

There is a reason behind that. Historically, within organizations, people haven't wanted to talk about attacks and what they're seeing, but I think that's a problem as well. Letting people know what you're seeing, letting them understand, "Hey, this is why we don't have nice things all the time," and using that to really highlight back to making every employee a member of the information security team. You are on the front lines, folks. Just the fact that you work here makes you a target for people who are trying to do nefarious things within the institution. So using those "war stories," if you will, to help educate the broad employee base and let them know—not to scare them, but to really educate and say, "Hey, this is what's happening.

(31:15):

This is what we're doing about it, and this is the part you play in that, and how you can help us." I think one of the really amazing things about information security is the way that it can bring people together, focused on a common adversary. There are bad people trying to do bad things, and we can all focus on being the good people trying to stop evil. A large vendor within the information security space for a long time had the slogan, "Stop evil." Great, we can all rally around that. So I think telling those stories helps really rally people and create a sense of, "We're in this together," and that's really the culture and the feeling you want within the organization.

Penny Crosman (32:08):

That makes sense. All right. Any questions from the audience? Got one here. Thanks, Megan.

Audience Member 1 (32:24):

Thank you. Hi. Just a question or a thought on human trafficking and how payments interface with human trafficking and how related it is to the fraud and criminal activity going on at the moment. Are you guys as a bank doing anything specific around detecting that or assisting law enforcement agencies to manage it better?

Brian Minick (32:48):

Yeah, there are obviously Bank Secrecy Act pieces that come into play there, like anti-money laundering and sanctions lists. We are watching for a variety of criminal activities within the transactions. A lot of those do have legal requirements for us from a reporting perspective. When you see certain types of activity, it is required by law—not just regulation, but actual law—to report those within a certain period of time and then work with law enforcement based on what we're seeing. So yeah, through a lot of those capabilities and programs, not just the trafficking piece, but a lot of the illicit activity tends to get caught up in some of those blankets and nets that we're trying to adhere to.

Penny Crosman (33:42):

And there are some groups, like one called "The Knoble," that do some really interesting work in this area, especially around big sporting events. They'll work with a group of banks to try to identify transactions that seem sketchy. Any other questions? Got one here. Thank you.

Audience Member 2 (34:07):

From a proactive threat detection perspective, you talked about this earlier. How do you keep your personnel refreshed so that you're constantly getting diverse ideas and strategies into your ethical hacking and red teams as you combat the bad actors that are out there?

Brian Minick (34:24):

Yeah, so that's a great question. It's a challenge from an industry perspective: how do you keep the edge sharp as you're doing that? I think there are a couple of different aspects to that. One is from an analyst perspective, with the information coming in. We've got teams of people who are looking at alerts day in and day out, and there's a thing called alert fatigue as they keep looking at these things and going, "Yeah, false positive, nothing there, nothing there." And they tend to get into that rhythm. We actually rotate people around in different areas. We are intentional; we call it "opening the aperture" for how much noise we let into the system and are able to dynamically adjust that so that some things we may say, "Hey, this particular detection is going to be pretty noisy, but when it hits, it's going to be a big, major thing."

(35:29):

Whereas this one isn't very noisy; every time it goes off, it's probably bad. So we're trying to tailor that and make sure our people know and have that context as they go into it. The other side of it, though, is maybe where you're going, which is, how do you stay on top of what the attackers are doing and the latest pieces there? I think that one is just the relationships that the people on the team have. We're very much encouraging our folks to be involved in the industry and multiple industries, sharing information. I mentioned that tiering of information sharing. I don't want my team just gathering information off of large portals where a bunch of stuff is shared. I want them out there developing relationships, analyst to analyst, where if someone else sees something new and novel, they're going to pick up the phone and call my team and tell them, and vice versa. You get into those relationships by sharing the information you have, and hopefully you're able to create new intelligence that is valuable to people and use that to create relationships out there in the industry. So we look at intelligence as something to be used, one, to defend ourselves, but also to build those relationships and help defend the broader industry. So getting plugged into that is key, and not just within the financial sector, but even across other industries, I think, is key for staying sharp, if you will.

Penny Crosman (37:00):

Yeah, interesting question. Thank you. Well, we're just about out of time. We are going to have our innovation lunches next, but I want to thank Brian Minick so much. That was really interesting. Appreciate it. Thanks for coming.