With the advent of new lending strategies, analytic capabilities and big data, the way lenders manage the entire credit lifecycle has evolved.

Zoot Enterprises recently discussed advancements in collections methodologies with Keith Shields, senior vice president of Portfolio Management at Loan Science and chief analytics officer at Magnify Analytic Solutions.

Shields discussed the creation of effective collections infrastructure, innovations in the field and the major factors driving change in collections strategies.

What are the most effective methodologies in creating a collections infrastructure today?

Keith Shields:  There are three key elements: data retention, data integration and enabling data-driven insights (which often comes from predictive analytics). It starts with data retention.

Organizations with great collections infrastructure understand that they need to store all data, more or less forever. They don’t fall victim to short-sighted retention policies that call for data to be discarded after a set period to save disk space. Storing data is extremely important because you don’t know what you are going to find until you keep it and mine it.

Collections infrastructure exists to deliver information and data-driven insights in real-time to collectors.  To accomplish this, the operational data stores with which collectors interface must have real-time access to the historical data retained by the data warehouse, and to the analytic insights that are derived from that data.

An effective collections infrastructure also depends on good data integration. Data retention means storing all the rows of transactional information about “Mr. Smith,” rows that mean little before they are integrated. Data integration makes sense of all that information: it puts everything you know about a person into a summarized, aggregated view of the customer in the form of attributes.

For example, the number of address changes Mr. Smith has had in the last two years, or the number of payments over $100 he has made in the last six months, tells us something about his future behavior. This information feeds the predictive models that provide insight.
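The attribute derivation Shields describes can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation; the record layout, field names and thresholds are all hypothetical:

```python
from datetime import date, timedelta

# Hypothetical raw transaction rows for one customer ("Mr. Smith").
# In practice these would come from the data warehouse.
transactions = [
    {"type": "payment",        "amount": 150.0, "date": date(2024, 5, 1)},
    {"type": "payment",        "amount": 80.0,  "date": date(2024, 3, 15)},
    {"type": "payment",        "amount": 120.0, "date": date(2024, 1, 20)},
    {"type": "address_change", "amount": None,  "date": date(2023, 11, 2)},
]

def derive_attributes(rows, as_of):
    """Collapse raw rows into a summarized, aggregated customer view."""
    six_months_ago = as_of - timedelta(days=182)
    two_years_ago = as_of - timedelta(days=730)
    return {
        "payments_over_100_last_6m": sum(
            1 for r in rows
            if r["type"] == "payment"
            and r["amount"] > 100
            and r["date"] >= six_months_ago
        ),
        "address_changes_last_2y": sum(
            1 for r in rows
            if r["type"] == "address_change" and r["date"] >= two_years_ago
        ),
    }

attrs = derive_attributes(transactions, as_of=date(2024, 6, 1))
# attrs now holds the model-ready view: two $100+ payments in the last
# six months and one address change in the last two years.
```

The point of the step is the shape change: many raw rows in, one attribute vector per customer out, ready to feed a predictive model.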


The third key element in a collections infrastructure is making data-driven insight available to operations in a flexible, usable manner. This is the role of business rules engines (BREs), which allow models, strategies, optimizations and advanced analytics to be applied in a flexible, self-documenting way.

Back to our “Mr. Smith” example: we can put all the known information about Mr. Smith through a mathematical formula and learn that each payment greater than $100 he makes within six months reduces his risk. That insight helps determine the best course of action to take. BREs house the predictive models and algorithms for creating a collections strategy, and for assigning the strategy and communication channel for a particular customer based on these discoveries. More importantly, they make this intelligence accessible to the collections agents who make the day-to-day decisions that affect the performance of the portfolio.
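A rules engine's scoring-and-routing step might look like the following minimal sketch. The base rate, coefficient, cutoffs, strategy names and channels are all invented for illustration; a production BRE would load these from configurable, auditable rule tables rather than hard-coding them:

```python
def score_risk(attrs):
    """Toy scoring formula: each recent $100+ payment lowers risk.
    The 0.50 base rate and 0.10 coefficient are made up for illustration."""
    return max(0.0, 0.50 - 0.10 * attrs.get("payments_over_100_last_6m", 0))

def assign_strategy(attrs):
    """Rules-engine-style mapping from a risk score to a collections
    treatment and contact channel. Cutoffs and labels are hypothetical."""
    risk = score_risk(attrs)
    if risk >= 0.40:
        return {"strategy": "early_intensive", "channel": "phone"}
    if risk >= 0.20:
        return {"strategy": "standard", "channel": "letter"}
    return {"strategy": "light_touch", "channel": "email"}
```

Under these made-up numbers, a customer with two recent $100+ payments scores 0.30 and is routed to a standard letter strategy, while one with none scores 0.50 and gets an early phone call. The value of housing this logic in a BRE is that the cutoffs and treatments can be changed and documented without rewriting code.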


Where are you seeing the most innovative approaches in collections?

KS: Credit card issuers have traditionally been at the forefront of collections strategies because of the highly transactional nature of the asset. When dealing with a credit card, customer risk profiles can change daily based on the frequency and nature of the transactions. If someone makes a couple of large transactions that are atypical, they will get a call immediately, because their buying behavior has changed in a statistically significant way.

While this is an example of a fraud alert, not a collections application, it illustrates the paradigm credit card issuers operate in. If a customer does something that indicates they are a greater default risk today than yesterday, the issuer has to update its intelligence as frequently as transactions are made. That requires both rigorous data retention and real-time application of predictive analytics. The collections infrastructure I described earlier doesn’t change at all; it just “fires” more often.

Recovery collectors, or collectors of charged-off debt, are increasingly using advanced analytical techniques to determine when to collect debt, how to collect it, and by whom (e.g., internally, through a third-party agency or through legal channels). Because they compete with other collections agencies to buy debt, the price they can offer is largely determined by how well they can collect on the back end. Their competitiveness in bidding is highly dependent on their sophistication in collections, so they have to evolve quickly.

Loan Science has also seen tremendous gains in the way student loans are managed and collected. This asset class is difficult to service because traditional analytically-driven collections strategies are not enough. There are a variety of work-out plans and rehabilitation programs available, so it is necessary to be innovative when determining the right terms to offer and to which students.

As auto lenders increasingly expand into subprime financing they find that those loans do not perform like a typical prime auto loan – protracted delinquency cycles, partial payments, payment extensions and due date changes are a way of life for a subprime loan. Auto lenders have to adapt to the signals in a subprime loan differently than a prime loan. If they are interested in that asset class they have to evolve their collections practices around that type of loan.

Lenders are increasingly looking at optimization but don’t have sufficient data to do so. What advancements are you aware of to optimize pre-charge-off collection efforts?

KS: Traditionally, optimization of collections strategies has been achieved through a directed series of champion/challenger tests that, more or less, call for collecting half of the portfolio one way and the other half a different way, measuring the results and testing again.

This is an iterative path to an optimal strategy, and it has historically been time-consuming, but with BREs the process is faster than it has ever been. They make it easy to program and implement all manner of strategies, they shorten the feedback loop, and they speed up measurement and analysis.
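The champion/challenger loop described above can be sketched as a random split followed by a common performance metric. Everything here (the fixed seed, the 50/50 split, the recovery-rate metric) is illustrative, not a description of any particular lender's process:

```python
import random

def split_portfolio(accounts, seed=0):
    """Randomly assign each account to the champion or challenger cell."""
    rng = random.Random(seed)  # fixed seed keeps the split reproducible
    champion, challenger = [], []
    for acct in accounts:
        (champion if rng.random() < 0.5 else challenger).append(acct)
    return champion, challenger

def recovery_rate(group, dollars_collected):
    """Dollars collected per dollar owed: the metric compared across cells."""
    owed = sum(acct["balance"] for acct in group)
    return dollars_collected / owed if owed else 0.0

accounts = [{"id": i, "balance": 1000.0} for i in range(100)]
champion, challenger = split_portfolio(accounts)
# Each cell is then worked under its own strategy; the strategy with the
# better recovery_rate becomes the next champion, and the loop repeats.
```

A BRE shortens this loop by making each new challenger strategy cheap to program, deploy and measure.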

The most advanced organizations can do optimization of collections strategies in a less iterative fashion. They still do champion/challenger testing, but they need fewer iterations to get optimal results.

For example, what if you decide to change your collections strategies to delay issuance from 15 to 30 days of delinquency, and want to know whether the result will be closer to or further from optimal?

Quantifying the effect of those changes and understanding the linkages between strategies, staffing and losses is critical. If you can quantify how many collectors you can take out of the pool and the amount by which losses will increase as a result, optimization can be done in an elegant, mathematically rigorous fashion.
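Once the loss-versus-staffing relationship has been quantified, the trade-off reduces to minimizing a simple total-cost function. The loss curve below is a made-up stand-in for the measurements a lender would derive from its own champion/challenger history, and the per-collector cost is equally hypothetical:

```python
def total_cost(n_collectors, cost_per_collector, loss_curve):
    """Total cost of a staffing level: payroll plus expected losses."""
    return n_collectors * cost_per_collector + loss_curve[n_collectors]

def optimal_staffing(cost_per_collector, loss_curve):
    """Choose the staffing level that minimizes total cost."""
    return min(loss_curve,
               key=lambda n: total_cost(n, cost_per_collector, loss_curve))

# Hypothetical curve: expected annual losses at each staffing level,
# as it would be estimated from historical champion/challenger results.
loss_curve = {10: 2_000_000, 20: 1_000_000, 30: 700_000, 40: 650_000}
best = optimal_staffing(60_000, loss_curve)
# With these invented numbers, 20 collectors minimizes payroll plus losses:
# adding staff beyond that costs more in payroll than it saves in losses.
```

The hard part in practice is estimating the loss curve itself; once it exists, the optimization step is straightforward arithmetic.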


How have big data and the availability of new analytics techniques changed collections?

KS: The short answer is: a lot. Big Data changes collections immensely because it provides more attributes to feed into predictive models. That means predictive models get better, insights into the portfolio become more trenchant, and strategies become more targeted and closer to optimal.

That said, Big Data also means that data retention and data integration efforts become more difficult. If data comes from many different sources (as is often the case with Big Data), the metadata will be inconsistent, making the process of building models, mining data and gaining insight more difficult.

I liken Big Data to a field of clovers. It increases the number of four-leaf clovers in the field, meaning the truly valuable insights available in the data become more numerous. But Big Data also exponentially increases the number of three-leaf clovers, meaning that mining the data for the really valuable insights can actually become harder if improvements in data retention and integration technologies don’t keep pace with the rate at which data is made available.

Keith Shields is the senior vice president of Portfolio Management at Loan Science, a full-service portfolio management firm based in Austin, Texas (www.loanscience.com). He also serves as the chief analytics officer for Magnify Analytic Solutions, an analytic services firm based in Detroit (www.magnifyas.com). Previously, Shields was the director of Global Analytics at Ford Motor Credit, vice president of Portfolio Management at Credit Acceptance and mathematician at the Tomahawk Labs of the Naval Surface Warfare Center.


Karen Gordon is the public relations manager for Zoot Enterprises, located in Bozeman, Mont. You can follow her on Twitter @karenrgordon.
