Congress issued banks an implicit challenge last week when it ended the five-year controversy over how cybersecurity-threat information should be shared among companies and government agencies.
The issue had been clear: to what extent should companies protect customers' and employees' personal information as they compare notes about security incidents? Congress' answer was equally clear: not that much.
However, privacy experts still warn that the absence of consumer protections in the legislation — part of the omnibus spending bill approved last Friday on Capitol Hill — could backfire, making it easier for personally identifiable information to fall into the wrong hands while it circulates among private companies and federal authorities.
"The worst-case scenario is that this undermines data security and cybersecurity," said Robyn Greene, policy counsel at New America's Open Technology Institute. "Unless companies take upon themselves a higher burden than is required in the bill to remove personally identifiable information, a lot of PII can still be shared regardless of the fact it's totally unnecessary for increasing cybersecurity."
And that is where the challenge lies. It is up to banks and other private companies to do the right thing about data privacy: to try to protect it. It is a rare opportunity to help rebuild some of the public trust in bankers that was lost after the financial meltdown.
The Final Legislation
In the abstract, the sharing of information about threat indicators such as malware signatures could help companies react more quickly to cybercriminal attempts to break into their networks.
But where earlier versions of the cyberthreat-sharing bill had strong protections for consumer data built in, the final version of the Cybersecurity Information Sharing Act of 2015 eliminated or watered down many of those protections.
One clause stripped from the bill was a mandate to redact personal information that is not directly related to a cybersecurity event.
"They do have a requirement for companies to remove PII, but it's a dangerously weak requirement," Greene said. The standard for reviewing the data is low, she said.
"You run the risk of a cursory review that could miss some of the personally identifiable information that would have otherwise been identified and removed had that company engaged in reasonable efforts to review the data," she said. In the new bill, "there is no reasonable effort, there's no standard whatsoever. And the bill only requires the removal of data if you know at the time of sharing that it's not directly related to the threat. That creates a default position where companies can claim they don't have complete situational awareness and did not know whether or not the information was directly related to the threat. The default is to leave PII in the indicator, rather than remove it."
Also missing from the final act are provisions that would have prevented companies from directly sharing information with the National Security Agency. Privacy groups worry that the NSA and FBI could pull cybersecurity data feeds into their surveillance tools and mine them for their own purposes until they found actionable information.
That worry stems from the fact that "this program would have no judicial oversight," Greene said. "There wouldn't be that important check of Americans' civil liberties that courts usually serve in developing evidence in these situations."
The cybersecurity act no longer has restrictions on using gathered information for surveillance purposes. A requirement that the information be used only in a cybersecurity capacity was deleted. And where a prior bill had all breach data filtered through the Department of Homeland Security, the final version would allow free movement of cybersecurity data among all government agencies and storage at multiple government locations, several of which have been subject to data breaches in recent years.
Real-World Risks, Rewards
PII is coveted by malicious actors and hackers for various purposes, including identity theft and online-banking attacks. Sharing it among companies and government agencies creates more places where this information is stored and vulnerable to attack.
An example of how a consumer's personal data could be part of a threat indicator is if someone registered a website domain name that was being used to host malware.
"We might pull the information about who registered the domain, and depending on the nuances of the threat indicator, they might be an actual criminal or it may be a completely synthetic ID, which we see quite a bit of, or it might be a genuine person's information," said Sean Tierney, vice president of threat intelligence at IID, a threat intelligence platform provider. (Until November, Tierney was executive director of computer emergency response and cyberintelligence at Morgan Stanley.)"It could either be that they were involved in the suspicious activity or they may be a victim."
Most types of cyberthreat data, such as malicious IP addresses and email addresses, do not include personally identifiable information, he said.
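Tierney's domain-registration example can be sketched as well. In this hypothetical Python fragment (field names are invented for illustration), the shareable indicator — the malicious domain itself — is separated from the WHOIS registrant details, which might belong to a criminal, a synthetic identity, or an innocent victim, and which one approach would hold back for analyst review rather than share:

```python
# Hypothetical sketch: separate the shareable threat indicator (the domain
# hosting malware) from WHOIS registrant details, whose owner may be a
# criminal, a synthetic identity, or a victim. Field names are illustrative.

def split_domain_indicator(whois_record):
    # The domain itself describes the threat and can be shared.
    indicator = {"domain": whois_record["domain"], "type": "malware-host"}
    # Registrant details are potential PII; hold them for analyst review.
    registrant = {k: v for k, v in whois_record.items()
                  if k.startswith("registrant_")}
    return indicator, registrant

record = {
    "domain": "bad.example",
    "registrant_name": "J. Doe",
    "registrant_email": "jdoe@example.com",
}

shareable, held_back = split_domain_indicator(record)
```

Here `shareable` carries only the domain and indicator type, while both registrant fields land in `held_back` for a human decision.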