Trends in data governance have been disrupting private enterprise, especially in privacy-sensitive industries like financial services.
While high-profile privacy regulations such as GDPR and, most recently, CCPA have given consumers more protection over how their data is used, they have also left enterprises at risk of steep fines if found in violation.
In addition, the advent of the cloud, which moved companies' data out of their own ERP and client management systems to third-party providers like Salesforce and Google Analytics, has left companies with data models spread across multiple systems governed by other companies. This shift has made it even more challenging to ensure that first-party data is being used in accordance with new laws. Now more than ever, companies need a safety valve.
Increasingly, regulations are in effect to protect our data from misuse because individuals understandably have preferences over how their data is collected, processed, and shared. But what about the proper use of data to improve the customer experience? If someone is a credit card customer who also has a checking account and a mortgage through the same financial institution, chances are that customer would want said institution to know that. But in many instances these data points are not being reconciled.
As financial services firms move away from siloed views toward a more holistic view of the customer, they need to navigate these new challenges adeptly. They must strike a careful balance between extracting more value and insight from data and maintaining customer trust. At the same time, machine learning can help solve important problems and mine valuable customer insights, but this approach often requires the use of sensitive, personal data. In many cases, data privacy and data science are at odds with one another — but it doesn’t need to be this way.
Privacy-enhancing technology can and should play an integral role in privacy protection for consumers and enterprises alike. For individuals, this may mean using voice assistants or genetic testing without the inhibitions we so often have today. Hospitals could use sensitive patient data for better treatments, earlier diagnosis, and ultimately prevention. And financial institutions could share proprietary data with their competitors to solve mutual problems and comply with federal laws targeting money laundering and consumer fraud.
The good news is that there are a variety of data privacy techniques available for adoption. For example, federated learning, also known as collaborative learning, is a type of distributed machine learning that enables multiple data owners to build a common, robust machine learning model without sharing data. This gives companies the opportunity to address critical issues such as data privacy, data security, data access rights, and access to heterogeneous data. This type of machine learning is a good option for companies trying to learn on multiple distributed devices in a secure way (e.g., manufacturing, smart cities, and IoT), as well as for business partnerships interested in training a machine learning model together without revealing the actual values held in their customer or proprietary data.
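To make the idea concrete, here is a minimal sketch of the federated averaging pattern (FedAvg), assuming two hypothetical institutions that each hold private data for a shared linear regression task. The client names, learning rate, and round counts are illustrative assumptions, not a production recipe; the key point is that only model weights travel to the server, never the raw records.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=20):
    """Run a few gradient-descent steps on one client's private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w

def federated_average(weight_list, sizes):
    """Server aggregates client models, weighted by local dataset size."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(weight_list, sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Two institutions with private datasets drawn from the same process.
clients = []
for n in (50, 80):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.01, size=n)
    clients.append((X, y))

# Each communication round: clients train locally, server averages weights.
global_w = np.zeros(2)
for _ in range(10):
    local_models = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(local_models, [len(y) for _, y in clients])

print(global_w)  # converges close to the true weights
```

Real deployments layer secure aggregation or differential privacy on top of this averaging step, since shared weights can still leak information about the underlying data.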
At the same time, it’s important to point out that privacy-enhancing technology should be widely accessible. Technology should be open source, fast, simple, and compatible with existing workflows and tools. Secure systems are no longer a “nice to have”; they are what financial and other customers expect in order to trust that their data will not be compromised. By ensuring access and ease of use, financial services organizations can preserve trust while working smarter and more efficiently with available data.
While consumer trust levels for financial services are relatively high at