Among the top cybersecurity risks banks face are insider threats — people within an organization who accidentally, negligently or maliciously expose it to harm.
Having overseen information security and cyber programs for Bank of America, David Reilly is intimately familiar with insider threats and measuring the risk they pose. He was chief information officer of the bank’s global banking and markets operations until October 2021, and before that was the company's chief technology officer.
Reilly is now a board member of companies including Ally Bank and Safe Security, a cybersecurity risk quantification firm.
Products like Safe Security's are one way to quantify cybersecurity risk.
Reilly sat down with American Banker to discuss the need for risk ratings, the importance of a consistent language for cybersecurity risk measures, and what companies can do to mitigate those risks.
How much of a concern are insider risks in the financial services industry?
DAVID REILLY: I don't think it's peculiar to financial services. I think we need to think about this more broadly.
You have people inside your network, like database administrators, with highly privileged access, which they need to do their jobs. But if those credentials fall into the wrong hands, they can cause harm.
It's not that difficult today to discover who might be a system administrator or a database administrator at a company. We could go to LinkedIn right now and find those administrators explaining that's what they do. Now, you've got a first name and a last name. That’s not enough, but it's a start for you to then learn more about that person.
God forbid they have something going on in their personal lives that might make them susceptible to an approach or a threat. Pretty soon, you can gather enough information that, if you're a bad actor, you can go try and get a hold of their credentials.
Assuming that can happen pushes you down a very important and necessary road toward protecting yourself against that kind of insider risk.
We shouldn't just limit our thinking here to employees; every large-scale enterprise uses partners, contractors and third parties, some of whom also have insider access that needs to be managed extremely carefully.
How exactly do you quantify the risks that you face with respect to insider threats?
You've got to look at it through three lenses at the same time. First, you've got to think about the people risk, which we've talked about a little bit.
Second is the technology risk — what access those individuals have to the different parts of your technology ecosystem. That’s the servers, the databases, the data repositories, the network itself and the storage infrastructure.
Third is the processes that an individual has to exercise, or can exercise, to do their job. That's a pretty dry and seemingly uninteresting thing to look at, but I give it equal billing to the people risk and the technology risk because if you can set up or decommission a server, by default you've got pretty elevated privileges.
Measuring across those three dimensions and then combining the measurements is, I think, a key element of the very best CRQ platforms.
And for reference, CRQ stands for …
Cyber risk quantification.
Here’s one way to think about it: If you look at the financial performance of a company, there's a pretty standard way to assess that through the books and records that come out of the chief financial officer's division, and everybody knows how to read those.
We don't have that for cyber risk. One of the things we think CRQ will do is establish that kind of consistent vocabulary of cyber risk across different industries and within the same industry.
What standard financial reports do for a CFO to articulate financial risks inside and outside the company, CRQ is going to be able to do for cyber risk.
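To make that combination concrete, here is a minimal, hypothetical sketch of rolling per-person scores across the three lenses into one number. The lens weights and the 0-to-1 scale are illustrative assumptions, not how Safe Security or any particular CRQ platform computes its ratings:

```python
# Hypothetical sketch: combining people, technology and process risk
# into one score. Weights and scale are illustrative assumptions only.
LENS_WEIGHTS = {"people": 0.40, "technology": 0.35, "process": 0.25}

def combined_risk(scores: dict[str, float]) -> float:
    """Weighted average of per-lens scores, each on a 0-to-1 scale."""
    return sum(LENS_WEIGHTS[lens] * scores[lens] for lens in LENS_WEIGHTS)

# A database administrator with broad technical access and the ability
# to decommission servers scores high on two of the three lenses.
dba = {"people": 0.3, "technology": 0.9, "process": 0.8}
print(f"combined insider-risk score: {combined_risk(dba):.2f}")  # ~0.63
```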
What are some of the best practices for banks to understand, measure and mitigate insider threats?
Again, you've got to think about the person, the technology and the processes they can run, and there are a few things.
First of all, review your identity and access management measures. Consider who has access to what and ensure that access permissions are regularly reviewed.
There's a notion in the industry called use it or lose it. What that means is, after a period of time — oftentimes, it’s 90 days — if you haven't exercised an access grant that you've been given, reset that to zero access.
That's a good way of ensuring that people don't build up access permissions they don't need, and it keeps your attack surface as tight as it can be.
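A minimal sketch of that use-it-or-lose-it rule, assuming a hypothetical grant record with a last-used timestamp rather than any real identity-and-access-management product:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical grant record; field names are illustrative assumptions.
@dataclass
class AccessGrant:
    user: str
    resource: str
    last_used: Optional[datetime]  # None: the grant was never exercised

def expire_unused(grants: list[AccessGrant], days: int = 90) -> list[AccessGrant]:
    """'Use it or lose it': revoke any grant idle for `days` (often 90)."""
    cutoff = datetime.now() - timedelta(days=days)
    kept = []
    for grant in grants:
        if grant.last_used is not None and grant.last_used >= cutoff:
            kept.append(grant)
        else:
            # In a real system this would call the IAM platform's revoke API.
            print(f"revoking {grant.user}'s access to {grant.resource}")
    return kept
```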
In addition, a lot of companies employ behavioral monitoring. They will watch what goes on inside the network — what those different IDs and credentials have been used for — to learn the regular patterns and spot anomalous ones.
Sometimes an anomaly arises because a system administrator had to go through a break-glass protocol, meaning they had to bypass normal access control procedures for a production incident. But knowing that an anomalous pattern has occurred is key, so that you can verify it was not a bad actor.
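A toy version of such a behavioral baseline, with an assumed log format and the simplest possible anomaly rule; production user-behavior analytics tools are far more sophisticated:

```python
from collections import Counter

def build_baseline(history: list[tuple[str, str]]) -> dict[str, Counter]:
    """history: (user_id, action) pairs from past activity logs."""
    baseline: dict[str, Counter] = {}
    for user, action in history:
        baseline.setdefault(user, Counter())[action] += 1
    return baseline

def is_anomalous(baseline: dict[str, Counter], user: str, action: str) -> bool:
    """Flag actions this credential has never performed before.

    A hit is not proof of a bad actor: a break-glass login during a
    production incident looks anomalous too, which is why the flag
    should trigger verification rather than automatic blocking.
    """
    return baseline.get(user, Counter())[action] == 0

# Illustrative data: user and action names are hypothetical.
history = [("dba_jane", "query_prod_db")] * 50 + [("dba_jane", "backup_db")] * 5
baseline = build_baseline(history)
print(is_anomalous(baseline, "dba_jane", "export_customer_table"))  # True
```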
When you're doing these assessments of the risks you face in these areas, it seems like you'd probably be bringing people in to help you do that. Is it something that you can do completely internally?
Larger, more sophisticated cyber teams would probably do this themselves. If you don't have access to those skills, there's a partner network that you can bring in to help you do that.
Ultimately, accountability always lands with the chief information security officer, the chief risk officer and sometimes the chief information officer, depending on where the CISO reports. It has to be owned inside the corporation, even if partners are used to help with the assessment and the recommendations based on it.
I guess what was really behind that question was the mentality piece of it. You don't want to lie to yourself about the risks that you face; you want to have an objective assessment, and a way of helping to do that would be bringing in a partner to point out whether you’re misleading yourself about the threats you face.
I think the role of champion-challenger is extremely important. I'd go a little further.
Many companies will use two competing tools to assess the same risk. That deliberately overlapping tooling gives you a second opinion on whether your protective measures are as strong as you need them to be, and as strong as you think they are.
That principle of trust but verify — of using a challenger model — is extremely helpful, particularly in this space, to ensure that you don't lull yourself into a false sense of security.