Don't Let Facebook Likes Sway Credit Decisions


What if you had to choose between keeping your Facebook friends and getting a home equity line of credit? What if your student loans could not be refinanced until you cut certain relatives out of your digital life?

Such decision-making scenarios could happen as fintech companies increasingly crunch alternative data to help determine a would-be borrower's creditworthiness.

As discussed in a recent American Banker article, some marketplace lenders are ditching FICO scores for data-driven business models that analyze behavioral as well as social information from a variety of online sources to determine applicants' creditworthiness. Kabbage, Social Finance (SoFi) and InVenture are among the companies that use information such as purchasing habits, employment records and even mobile-texting patterns to assess borrowers. Companies like Lenddo, Kreditech and Hello Soda, meanwhile, are built on the notion that data about one's social circles has predictive financial value, and they incorporate that data into their credit models.

While not everyone is aware of this emerging scoring mechanism, social credit has been growing in recent years. In some respects, little about this is new: lenders have always treated reputation as an indicator of creditworthiness. This is simply the digital equivalent on steroids, with would-be creditors collecting and analyzing vast amounts of data to generate FICO alternatives. Even as we move away from a world of FICO scores and toward one of big data and social networks, the same basic variables are in play, and there is an upside. As we have argued elsewhere, credit based on behavioral as well as social information is a welcome development for underserved populations traditionally passed over by mainstream financial institutions. Minorities, for example, may struggle to secure bank loans because of thin credit histories, while scoring based on behavioral patterns or social information could help them qualify. Wonderful, right?

Unfortunately, such credit scoring methods, especially those focused on social information, also have a dark side. Their effectiveness as an exclusive gauge of creditworthiness is questionable. They involve collecting and analyzing information about people who have neither consented to nor understood the terms. Often, people may not even know it is happening.

Even though innovative underwriting criteria can help more people get credit, the risks and potential harms of mining social media data may simply be too great. The Federal Trade Commission recently observed, for example, that such models could penalize those who avoid social networks simply because lenders would have less information about them. Less information means more risk for lenders, and greater risk means a worse financial deal for consumers who don't use Facebook.

Further, credit scoring methods based on social information use algorithmic decision-making, which shelters the underwriting process under a veil of secrecy and makes it hard to monitor or criticize.

Most important, and perhaps most frightening, credit scoring mechanisms based on social information incentivize consumers to artificially perfect their online images. Such online fine-tuning includes cleaning up posts about how "wasted" they were last night or tweets about how upset they are over getting laid off. It also includes deleting all record of being affiliated with "bad" acquaintances.

These alternative credit scoring algorithms do not care that you once volunteered in a poor community, lived close to a "bad" neighborhood, or that a recently bankrupt social network contact was your best friend in preschool. Judged on the surface, such affiliations could be deemed financially harmful and consequently force people to choose between social ties and better credit.

Because of these risks, it would be wise for regulators to limit the ability to use certain types of data for scoring purposes. Such an approach wouldn't be a first, either. Take medical information, for instance. While an individual's terminal illness could significantly influence one's ability to repay a loan, laws restrict the use of specific medical information for credit scoring purposes. A similar view should guide our response to credit scoring methods based on social information.

Even though a social credit model based on "tell me who your friends are and I'll tell you who you are" may prove accurate for scoring purposes, its effectiveness cannot stand by itself to justify the privacy and social harms it generates.

Nizan Geslevich Packin is an assistant professor of law at the Zicklin School of Business, Baruch College, City University of New York. Yafit Lev-Aretz is a research fellow at the Information Law Institute at NYU.
