BankThink

Your data's just sitting there. Machine learning can change that.

Most financial institutions know it’s critical to manage the ever-increasing amounts of accessible data, but many miss opportunities to use that data in innovative ways.

Financial institutions have a plethora of data they can access, either through their own systems or through public sources. However, many can’t — or won’t — exploit the large volumes of data, particularly the "owned" data that an organization holds about customers.

This kind of data is typically called customer relationship management data: purchase history tied to app installs, email addresses, postal addresses and the like. Though financial institutions collect and maintain massive volumes of such data, stringent regulations restrict what many firms can do with it.

These regulations include the Dodd-Frank Act in the United States, Europe’s Markets in Financial Instruments Directive II and the General Data Protection Regulation — all of which affect banks with a global presence. They heighten the pressure on organizations to manage and secure sensitive nonpublic information and personally identifiable information.

At the same time, the data needs to be easily accessible to authorized users, irrespective of where the data is stored and in what applications. In turn, financial firms are grappling with how to extract value and insights from the data to improve service, while also managing it in compliance with global regulations.

Meanwhile, the volume of data inexorably increases. The challenge of deriving value from data is therefore largely one of regulation, data diversity and volume. More important still is having a clear understanding of exactly what value a financial institution wants to extract.

The answer lies in the growing maturity of machine learning technologies, which are now moving from experimental pilots to practical production at some large institutions.

Using machine learning to automatically infer and predict the confidentiality category of large volumes of banking documents increases assurance that the compliance process is working systematically. Natural language processing coupled with machine learning means software can tirelessly search for, categorize and recategorize confidential and sensitive data across businesses and silos.

Data sensitivity is not a static quality. It varies with context — a single stock can be bought for many reasons — and changes constantly. Letting machines do the hard labor of data processing frees compliance professionals to focus on assessing risks. This reduces regulatory risk and improves performance over traditional manually driven, rules-based approaches.
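To make the idea concrete, the kind of document categorization described above can be sketched with a minimal text classifier. The sketch below is a toy multinomial naive Bayes model in pure Python; the confidentiality labels ("public", "internal", "confidential") and the sample documents are hypothetical placeholders, not drawn from any real institution's classification scheme, and a production system would use far richer features and training data.

```python
import math
from collections import Counter, defaultdict


def tokenize(text):
    # Toy tokenizer: lowercase, split on whitespace.
    return text.lower().split()


class NaiveBayesClassifier:
    """Minimal multinomial naive Bayes for document categorization."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.doc_counts = Counter()              # label -> number of documents
        self.vocab = set()

    def train(self, labeled_documents):
        for text, label in labeled_documents:
            self.doc_counts[label] += 1
            for word in tokenize(text):
                self.word_counts[label][word] += 1
                self.vocab.add(word)

    def predict(self, text):
        total_docs = sum(self.doc_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.doc_counts:
            # Log prior: how common this label is overall.
            score = math.log(self.doc_counts[label] / total_docs)
            total_words = sum(self.word_counts[label].values())
            for word in tokenize(text):
                # Laplace smoothing avoids zero probability for unseen words.
                count = self.word_counts[label][word] + 1
                score += math.log(count / (total_words + len(self.vocab)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label


# Hypothetical training examples with made-up confidentiality labels.
training = [
    ("quarterly earnings press release for shareholders", "public"),
    ("internal memo on branch staffing schedule", "internal"),
    ("client account numbers and social security details", "confidential"),
]

clf = NaiveBayesClassifier()
clf.train(training)
print(clf.predict("account numbers for a client"))  # → confidential
```

Because the model scores every document against every category, rerunning it as documents change is cheap — which is what allows continuous recategorization across silos, rather than a one-time manual labeling pass.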

Successful financial institutions use practical machine learning to turn this data into information that helps them differentiate customer experience and optimize operations. As machine learning leaves the lab, the performance gap will widen between the institutions that apply it and those that do not advance.
